00:00:00.001 Started by upstream project "autotest-per-patch" build number 126252 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.098 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.146 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.222 > git --version # 'git version 2.39.2' 00:00:00.222 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.239 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.239 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.619 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.629 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.639 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:05.639 > git config core.sparsecheckout # timeout=10 00:00:05.650 > git read-tree -mu HEAD # timeout=10 00:00:05.666 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:05.687 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:05.687 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:05.775 [Pipeline] Start of Pipeline 00:00:05.786 [Pipeline] library 00:00:05.788 Loading library shm_lib@master 00:00:05.788 Library shm_lib@master is cached. Copying from home. 00:00:05.800 [Pipeline] node 00:00:05.811 Running on WFP5 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:05.812 [Pipeline] { 00:00:05.821 [Pipeline] catchError 00:00:05.822 [Pipeline] { 00:00:05.831 [Pipeline] wrap 00:00:05.838 [Pipeline] { 00:00:05.843 [Pipeline] stage 00:00:05.844 [Pipeline] { (Prologue) 00:00:06.013 [Pipeline] sh 00:00:06.297 + logger -p user.info -t JENKINS-CI 00:00:06.317 [Pipeline] echo 00:00:06.319 Node: WFP5 00:00:06.326 [Pipeline] sh 00:00:06.619 [Pipeline] setCustomBuildProperty 00:00:06.629 [Pipeline] echo 00:00:06.630 Cleanup processes 00:00:06.634 [Pipeline] sh 00:00:06.910 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.910 1182881 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:06.925 [Pipeline] sh 00:00:07.211 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.211 ++ grep -v 'sudo pgrep' 00:00:07.211 ++ awk '{print $1}' 00:00:07.211 + sudo kill -9 00:00:07.211 + true 00:00:07.225 [Pipeline] cleanWs 00:00:07.235 [WS-CLEANUP] Deleting project workspace... 00:00:07.235 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.241 [WS-CLEANUP] done 00:00:07.245 [Pipeline] setCustomBuildProperty 00:00:07.259 [Pipeline] sh 00:00:07.539 + sudo git config --global --replace-all safe.directory '*' 00:00:07.665 [Pipeline] httpRequest 00:00:07.698 [Pipeline] echo 00:00:07.699 Sorcerer 10.211.164.101 is alive 00:00:07.709 [Pipeline] httpRequest 00:00:07.713 HttpMethod: GET 00:00:07.713 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.714 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.724 Response Code: HTTP/1.1 200 OK 00:00:07.725 Success: Status code 200 is in the accepted range: 200,404 00:00:07.725 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:09.708 [Pipeline] sh 00:00:09.988 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:10.003 [Pipeline] httpRequest 00:00:10.035 [Pipeline] echo 00:00:10.036 Sorcerer 10.211.164.101 is alive 00:00:10.045 [Pipeline] httpRequest 00:00:10.049 HttpMethod: GET 00:00:10.050 URL: http://10.211.164.101/packages/spdk_00bf4c5711d9237bcd47348a985d87b3989f6939.tar.gz 00:00:10.051 Sending request to url: http://10.211.164.101/packages/spdk_00bf4c5711d9237bcd47348a985d87b3989f6939.tar.gz 00:00:10.066 Response Code: HTTP/1.1 200 OK 00:00:10.067 Success: Status code 200 is in the accepted range: 200,404 00:00:10.067 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_00bf4c5711d9237bcd47348a985d87b3989f6939.tar.gz 00:01:49.497 [Pipeline] sh 00:01:49.778 + tar --no-same-owner -xf spdk_00bf4c5711d9237bcd47348a985d87b3989f6939.tar.gz 00:01:52.320 [Pipeline] sh 00:01:52.601 + git -C spdk log --oneline -n5 00:01:52.602 00bf4c571 scripts/perf: Include per-node hugepages stats in collect-vmstat 00:01:52.602 958a93494 scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default 00:01:52.602 a95bbf233 blob: set parent_id properly on spdk_bs_blob_set_external_parent. 
00:01:52.602 248c547d0 nvmf/tcp: add option for selecting a sock impl 00:01:52.602 2d30d9f83 accel: introduce tasks in sequence limit 00:01:52.615 [Pipeline] } 00:01:52.632 [Pipeline] // stage 00:01:52.642 [Pipeline] stage 00:01:52.645 [Pipeline] { (Prepare) 00:01:52.669 [Pipeline] writeFile 00:01:52.693 [Pipeline] sh 00:01:52.980 + logger -p user.info -t JENKINS-CI 00:01:52.997 [Pipeline] sh 00:01:53.279 + logger -p user.info -t JENKINS-CI 00:01:53.292 [Pipeline] sh 00:01:53.575 + cat autorun-spdk.conf 00:01:53.575 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.575 SPDK_TEST_NVMF=1 00:01:53.575 SPDK_TEST_NVME_CLI=1 00:01:53.575 SPDK_TEST_NVMF_NICS=mlx5 00:01:53.575 SPDK_RUN_UBSAN=1 00:01:53.575 NET_TYPE=phy 00:01:53.583 RUN_NIGHTLY=0 00:01:53.589 [Pipeline] readFile 00:01:53.624 [Pipeline] withEnv 00:01:53.627 [Pipeline] { 00:01:53.644 [Pipeline] sh 00:01:53.927 + set -ex 00:01:53.927 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:53.927 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:53.927 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.927 ++ SPDK_TEST_NVMF=1 00:01:53.927 ++ SPDK_TEST_NVME_CLI=1 00:01:53.927 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:53.927 ++ SPDK_RUN_UBSAN=1 00:01:53.927 ++ NET_TYPE=phy 00:01:53.927 ++ RUN_NIGHTLY=0 00:01:53.927 + case $SPDK_TEST_NVMF_NICS in 00:01:53.927 + DRIVERS=mlx5_ib 00:01:53.927 + [[ -n mlx5_ib ]] 00:01:53.927 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:53.927 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:00.495 rmmod: ERROR: Module irdma is not currently loaded 00:02:00.496 rmmod: ERROR: Module i40iw is not currently loaded 00:02:00.496 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:00.496 + true 00:02:00.496 + for D in $DRIVERS 00:02:00.496 + sudo modprobe mlx5_ib 00:02:00.496 + exit 0 00:02:00.504 [Pipeline] } 00:02:00.524 [Pipeline] // withEnv 00:02:00.529 [Pipeline] } 00:02:00.543 [Pipeline] // stage 00:02:00.551 [Pipeline] catchError 00:02:00.552 [Pipeline] { 00:02:00.566 [Pipeline] timeout 00:02:00.566 Timeout set to expire in 1 hr 0 min 00:02:00.568 [Pipeline] { 00:02:00.582 [Pipeline] stage 00:02:00.585 [Pipeline] { (Tests) 00:02:00.600 [Pipeline] sh 00:02:00.881 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:02:00.881 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:02:00.881 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:02:00.881 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:02:00.881 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:00.881 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:02:00.881 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:02:00.881 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:00.881 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:02:00.881 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:02:00.881 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:02:00.881 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:02:00.881 + source /etc/os-release 00:02:00.881 ++ NAME='Fedora Linux' 00:02:00.881 ++ VERSION='38 (Cloud Edition)' 00:02:00.881 ++ ID=fedora 00:02:00.881 ++ VERSION_ID=38 00:02:00.881 ++ VERSION_CODENAME= 00:02:00.881 ++ PLATFORM_ID=platform:f38 00:02:00.881 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:00.881 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:00.881 ++ LOGO=fedora-logo-icon 00:02:00.881 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:00.881 ++ HOME_URL=https://fedoraproject.org/ 00:02:00.881 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:00.881 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:00.881 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:00.881 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:00.881 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:00.881 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:00.881 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:00.881 ++ SUPPORT_END=2024-05-14 00:02:00.881 ++ VARIANT='Cloud Edition' 00:02:00.881 ++ VARIANT_ID=cloud 00:02:00.881 + uname -a 00:02:00.881 Linux spdk-wfp-05 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:00.881 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:02:03.468 Hugepages 00:02:03.468 node hugesize free / total 00:02:03.468 node0 1048576kB 0 / 0 00:02:03.468 node0 2048kB 0 / 0 00:02:03.468 node1 1048576kB 0 / 0 00:02:03.468 node1 2048kB 0 / 0 00:02:03.468 00:02:03.468 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:03.468 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:03.468 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:03.468 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:03.468 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:03.468 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:03.468 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:03.468 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:03.468 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:03.468 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:03.468 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:03.468 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:03.468 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:03.468 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:03.468 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:03.468 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:03.468 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:03.468 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:03.468 + rm -f /tmp/spdk-ld-path 00:02:03.468 + source autorun-spdk.conf 00:02:03.468 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.468 ++ SPDK_TEST_NVMF=1 00:02:03.468 ++ SPDK_TEST_NVME_CLI=1 00:02:03.468 ++ SPDK_TEST_NVMF_NICS=mlx5 00:02:03.468 ++ SPDK_RUN_UBSAN=1 00:02:03.468 ++ NET_TYPE=phy 00:02:03.469 ++ RUN_NIGHTLY=0 00:02:03.469 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:03.469 + [[ -n '' ]] 00:02:03.469 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:03.469 + for M in /var/spdk/build-*-manifest.txt 00:02:03.469 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:03.469 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:03.469 + for M in /var/spdk/build-*-manifest.txt 00:02:03.469 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:03.469 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:02:03.469 ++ uname 00:02:03.469 + [[ Linux == \L\i\n\u\x ]] 00:02:03.469 + sudo dmesg -T 00:02:03.469 + sudo dmesg --clear 00:02:03.469 + dmesg_pid=1184331 00:02:03.469 + [[ Fedora Linux == FreeBSD ]] 00:02:03.469 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.469 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:03.469 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:03.469 + [[ -x /usr/src/fio-static/fio ]] 00:02:03.469 + export FIO_BIN=/usr/src/fio-static/fio 00:02:03.469 + sudo dmesg -Tw 00:02:03.469 + FIO_BIN=/usr/src/fio-static/fio 00:02:03.469 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:03.469 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:03.469 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:03.469 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.469 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:03.469 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:03.469 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.469 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:03.469 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:02:03.469 Test configuration: 00:02:03.469 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:03.469 SPDK_TEST_NVMF=1 00:02:03.469 SPDK_TEST_NVME_CLI=1 00:02:03.469 SPDK_TEST_NVMF_NICS=mlx5 00:02:03.469 SPDK_RUN_UBSAN=1 00:02:03.469 NET_TYPE=phy 00:02:03.727 RUN_NIGHTLY=0 23:27:52 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:02:03.727 23:27:52 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:03.727 23:27:52 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:03.727 23:27:52 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:03.727 23:27:52 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.727 23:27:52 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.727 23:27:52 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.727 23:27:52 -- paths/export.sh@5 -- $ export PATH 00:02:03.727 23:27:52 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:03.727 23:27:52 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:02:03.727 23:27:52 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:03.727 23:27:52 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721078872.XXXXXX 00:02:03.727 23:27:52 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721078872.ucLhGP 00:02:03.727 23:27:52 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:03.727 23:27:52 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:03.727 23:27:52 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:02:03.727 23:27:52 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:03.727 23:27:52 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:03.727 23:27:52 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:03.727 23:27:52 -- common/autotest_common.sh@390 -- $ xtrace_disable 00:02:03.727 23:27:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:03.727 23:27:52 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:02:03.727 23:27:52 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:03.727 23:27:52 -- pm/common@17 -- $ local monitor 00:02:03.727 23:27:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.727 23:27:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.727 23:27:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.727 23:27:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:03.727 23:27:52 -- pm/common@25 -- $ sleep 1 00:02:03.727 23:27:52 -- pm/common@21 -- $ date +%s 00:02:03.727 23:27:52 -- pm/common@21 -- $ date +%s 00:02:03.727 23:27:52 -- pm/common@21 -- $ date +%s 00:02:03.727 23:27:52 -- pm/common@21 -- $ date +%s 00:02:03.727 23:27:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078872 00:02:03.727 23:27:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078872 00:02:03.727 23:27:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078872 00:02:03.727 23:27:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721078872 00:02:03.727 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078872_collect-vmstat.pm.log 00:02:03.727 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078872_collect-cpu-load.pm.log 00:02:03.727 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078872_collect-cpu-temp.pm.log 00:02:03.727 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721078872_collect-bmc-pm.bmc.pm.log 00:02:04.662 23:27:53 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:04.662 23:27:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:04.662 23:27:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:04.662 23:27:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:04.662 23:27:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:04.662 Mon Jul 15 09:27:53 PM UTC 2024 00:02:04.662 23:27:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:04.662 v24.09-pre-211-g00bf4c571 00:02:04.662 23:27:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:04.662 23:27:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:04.662 23:27:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:04.662 23:27:53 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']' 00:02:04.662 23:27:53 -- common/autotest_common.sh@1099 -- $ xtrace_disable 00:02:04.662 23:27:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.662 ************************************ 00:02:04.662 START TEST ubsan 00:02:04.662 ************************************ 00:02:04.662 23:27:53 ubsan -- common/autotest_common.sh@1117 -- $ echo 'using ubsan' 00:02:04.662 using ubsan 00:02:04.662 00:02:04.662 real 0m0.000s 00:02:04.662 user 0m0.000s 00:02:04.662 sys 0m0.000s 00:02:04.662 23:27:53 ubsan -- common/autotest_common.sh@1118 -- $ xtrace_disable 00:02:04.663 23:27:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:04.663 ************************************ 00:02:04.663 END TEST ubsan 00:02:04.663 ************************************ 00:02:04.663 23:27:53 -- common/autotest_common.sh@1136 -- $ return 0 00:02:04.663 23:27:53 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:04.663 23:27:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:04.663 23:27:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:04.663 23:27:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:04.663 23:27:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:04.663 23:27:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:04.663 23:27:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:04.663 23:27:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:04.663 23:27:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:02:04.920 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:04.920 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:05.179 Using 'verbs' RDMA provider 00:02:17.936 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done. 
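For reference, the configure step recorded above can be repeated outside Jenkins with the same flags. A minimal sketch, assuming a local SPDK checkout with submodules; the CI job unpacks its tree from an internal mirror, so the github.com clone URL and the pkgdep step below are illustrative, and --with-fio expects fio sources at /usr/src/fio:

# Rebuild SPDK with the configuration used by this job (flags copied from the log above).
git clone --recurse-submodules https://github.com/spdk/spdk.git
cd spdk
sudo ./scripts/pkgdep.sh   # SPDK's dependency installer; RDMA userspace libraries are needed for --with-rdma
./configure --enable-debug --enable-werror --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-coverage --with-ublk --with-shared
make -j"$(nproc)"          # the job itself runs 'make -j96' on the CI node
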
00:02:30.136 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:30.136 Creating mk/config.mk...done. 00:02:30.136 Creating mk/cc.flags.mk...done. 00:02:30.136 Type 'make' to build. 00:02:30.136 23:28:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:30.136 23:28:17 -- common/autotest_common.sh@1093 -- $ '[' 3 -le 1 ']' 00:02:30.136 23:28:17 -- common/autotest_common.sh@1099 -- $ xtrace_disable 00:02:30.136 23:28:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:30.136 ************************************ 00:02:30.136 START TEST make 00:02:30.136 ************************************ 00:02:30.136 23:28:17 make -- common/autotest_common.sh@1117 -- $ make -j96 00:02:30.136 make[1]: Nothing to be done for 'all'. 00:02:36.707 The Meson build system 00:02:36.707 Version: 1.3.1 00:02:36.707 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk 00:02:36.707 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp 00:02:36.707 Build type: native build 00:02:36.707 Program cat found: YES (/usr/bin/cat) 00:02:36.707 Project name: DPDK 00:02:36.707 Project version: 24.03.0 00:02:36.708 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:36.708 C linker for the host machine: cc ld.bfd 2.39-16 00:02:36.708 Host machine cpu family: x86_64 00:02:36.708 Host machine cpu: x86_64 00:02:36.708 Message: ## Building in Developer Mode ## 00:02:36.708 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:36.708 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:36.708 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:36.708 Program python3 found: YES (/usr/bin/python3) 00:02:36.708 Program cat found: YES (/usr/bin/cat) 00:02:36.708 Compiler for C supports arguments -march=native: YES 00:02:36.708 Checking for size of "void *" : 8 00:02:36.708 Checking for size of "void *" : 8 (cached) 00:02:36.708 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:36.708 Library m found: YES 00:02:36.708 Library numa found: YES 00:02:36.708 Has header "numaif.h" : YES 00:02:36.708 Library fdt found: NO 00:02:36.708 Library execinfo found: NO 00:02:36.708 Has header "execinfo.h" : YES 00:02:36.708 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:36.708 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:36.708 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:36.708 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:36.708 Run-time dependency openssl found: YES 3.0.9 00:02:36.708 Run-time dependency libpcap found: YES 1.10.4 00:02:36.708 Has header "pcap.h" with dependency libpcap: YES 00:02:36.708 Compiler for C supports arguments -Wcast-qual: YES 00:02:36.708 Compiler for C supports arguments -Wdeprecated: YES 00:02:36.708 Compiler for C supports arguments -Wformat: YES 00:02:36.708 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:36.708 Compiler for C supports arguments -Wformat-security: NO 00:02:36.708 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.708 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:36.708 Compiler for C supports arguments -Wnested-externs: YES 00:02:36.708 Compiler for C supports arguments -Wold-style-definition: YES 00:02:36.708 Compiler for C supports arguments 
-Wpointer-arith: YES 00:02:36.708 Compiler for C supports arguments -Wsign-compare: YES 00:02:36.708 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:36.708 Compiler for C supports arguments -Wundef: YES 00:02:36.708 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.708 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:36.708 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:36.708 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.708 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:36.708 Program objdump found: YES (/usr/bin/objdump) 00:02:36.708 Compiler for C supports arguments -mavx512f: YES 00:02:36.708 Checking if "AVX512 checking" compiles: YES 00:02:36.708 Fetching value of define "__SSE4_2__" : 1 00:02:36.708 Fetching value of define "__AES__" : 1 00:02:36.708 Fetching value of define "__AVX__" : 1 00:02:36.708 Fetching value of define "__AVX2__" : 1 00:02:36.708 Fetching value of define "__AVX512BW__" : 1 00:02:36.708 Fetching value of define "__AVX512CD__" : 1 00:02:36.708 Fetching value of define "__AVX512DQ__" : 1 00:02:36.708 Fetching value of define "__AVX512F__" : 1 00:02:36.708 Fetching value of define "__AVX512VL__" : 1 00:02:36.708 Fetching value of define "__PCLMUL__" : 1 00:02:36.708 Fetching value of define "__RDRND__" : 1 00:02:36.708 Fetching value of define "__RDSEED__" : 1 00:02:36.708 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:36.708 Fetching value of define "__znver1__" : (undefined) 00:02:36.708 Fetching value of define "__znver2__" : (undefined) 00:02:36.708 Fetching value of define "__znver3__" : (undefined) 00:02:36.708 Fetching value of define "__znver4__" : (undefined) 00:02:36.708 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:36.708 Message: lib/log: Defining dependency "log" 00:02:36.708 Message: lib/kvargs: Defining dependency "kvargs" 00:02:36.708 Message: lib/telemetry: Defining dependency "telemetry" 00:02:36.708 Checking for function "getentropy" : NO 00:02:36.708 Message: lib/eal: Defining dependency "eal" 00:02:36.708 Message: lib/ring: Defining dependency "ring" 00:02:36.708 Message: lib/rcu: Defining dependency "rcu" 00:02:36.708 Message: lib/mempool: Defining dependency "mempool" 00:02:36.708 Message: lib/mbuf: Defining dependency "mbuf" 00:02:36.708 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:36.708 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:36.708 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:36.708 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:36.708 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:36.708 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:36.708 Compiler for C supports arguments -mpclmul: YES 00:02:36.708 Compiler for C supports arguments -maes: YES 00:02:36.708 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.708 Compiler for C supports arguments -mavx512bw: YES 00:02:36.708 Compiler for C supports arguments -mavx512dq: YES 00:02:36.708 Compiler for C supports arguments -mavx512vl: YES 00:02:36.708 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:36.708 Compiler for C supports arguments -mavx2: YES 00:02:36.708 Compiler for C supports arguments -mavx: YES 00:02:36.708 Message: lib/net: Defining dependency "net" 00:02:36.708 Message: lib/meter: Defining dependency "meter" 00:02:36.708 Message: lib/ethdev: Defining dependency "ethdev" 00:02:36.708 
Message: lib/pci: Defining dependency "pci" 00:02:36.708 Message: lib/cmdline: Defining dependency "cmdline" 00:02:36.708 Message: lib/hash: Defining dependency "hash" 00:02:36.708 Message: lib/timer: Defining dependency "timer" 00:02:36.708 Message: lib/compressdev: Defining dependency "compressdev" 00:02:36.708 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:36.708 Message: lib/dmadev: Defining dependency "dmadev" 00:02:36.708 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:36.708 Message: lib/power: Defining dependency "power" 00:02:36.708 Message: lib/reorder: Defining dependency "reorder" 00:02:36.708 Message: lib/security: Defining dependency "security" 00:02:36.708 Has header "linux/userfaultfd.h" : YES 00:02:36.708 Has header "linux/vduse.h" : YES 00:02:36.708 Message: lib/vhost: Defining dependency "vhost" 00:02:36.708 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:36.708 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:36.708 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:36.708 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:36.708 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:36.708 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:36.708 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:36.708 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:36.708 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:36.708 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:36.708 Program doxygen found: YES (/usr/bin/doxygen) 00:02:36.708 Configuring doxy-api-html.conf using configuration 00:02:36.708 Configuring doxy-api-man.conf using configuration 00:02:36.708 Program mandb found: YES (/usr/bin/mandb) 00:02:36.708 Program sphinx-build found: NO 00:02:36.708 Configuring rte_build_config.h using configuration 00:02:36.708 Message: 00:02:36.708 ================= 00:02:36.708 Applications Enabled 00:02:36.708 ================= 00:02:36.708 00:02:36.708 apps: 00:02:36.708 00:02:36.708 00:02:36.708 Message: 00:02:36.708 ================= 00:02:36.708 Libraries Enabled 00:02:36.708 ================= 00:02:36.708 00:02:36.708 libs: 00:02:36.708 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:36.708 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:36.708 cryptodev, dmadev, power, reorder, security, vhost, 00:02:36.708 00:02:36.708 Message: 00:02:36.708 =============== 00:02:36.708 Drivers Enabled 00:02:36.708 =============== 00:02:36.708 00:02:36.708 common: 00:02:36.708 00:02:36.708 bus: 00:02:36.708 pci, vdev, 00:02:36.708 mempool: 00:02:36.708 ring, 00:02:36.708 dma: 00:02:36.708 00:02:36.708 net: 00:02:36.708 00:02:36.708 crypto: 00:02:36.708 00:02:36.708 compress: 00:02:36.708 00:02:36.708 vdpa: 00:02:36.708 00:02:36.708 00:02:36.708 Message: 00:02:36.708 ================= 00:02:36.708 Content Skipped 00:02:36.708 ================= 00:02:36.708 00:02:36.708 apps: 00:02:36.708 dumpcap: explicitly disabled via build config 00:02:36.708 graph: explicitly disabled via build config 00:02:36.708 pdump: explicitly disabled via build config 00:02:36.708 proc-info: explicitly disabled via build config 00:02:36.708 test-acl: explicitly disabled via build config 00:02:36.708 test-bbdev: explicitly disabled via build config 00:02:36.708 test-cmdline: explicitly disabled via build config 00:02:36.708 
test-compress-perf: explicitly disabled via build config 00:02:36.708 test-crypto-perf: explicitly disabled via build config 00:02:36.708 test-dma-perf: explicitly disabled via build config 00:02:36.708 test-eventdev: explicitly disabled via build config 00:02:36.708 test-fib: explicitly disabled via build config 00:02:36.708 test-flow-perf: explicitly disabled via build config 00:02:36.708 test-gpudev: explicitly disabled via build config 00:02:36.708 test-mldev: explicitly disabled via build config 00:02:36.708 test-pipeline: explicitly disabled via build config 00:02:36.708 test-pmd: explicitly disabled via build config 00:02:36.708 test-regex: explicitly disabled via build config 00:02:36.708 test-sad: explicitly disabled via build config 00:02:36.708 test-security-perf: explicitly disabled via build config 00:02:36.708 00:02:36.708 libs: 00:02:36.708 argparse: explicitly disabled via build config 00:02:36.708 metrics: explicitly disabled via build config 00:02:36.708 acl: explicitly disabled via build config 00:02:36.708 bbdev: explicitly disabled via build config 00:02:36.708 bitratestats: explicitly disabled via build config 00:02:36.708 bpf: explicitly disabled via build config 00:02:36.708 cfgfile: explicitly disabled via build config 00:02:36.708 distributor: explicitly disabled via build config 00:02:36.708 efd: explicitly disabled via build config 00:02:36.708 eventdev: explicitly disabled via build config 00:02:36.708 dispatcher: explicitly disabled via build config 00:02:36.708 gpudev: explicitly disabled via build config 00:02:36.708 gro: explicitly disabled via build config 00:02:36.708 gso: explicitly disabled via build config 00:02:36.708 ip_frag: explicitly disabled via build config 00:02:36.708 jobstats: explicitly disabled via build config 00:02:36.708 latencystats: explicitly disabled via build config 00:02:36.708 lpm: explicitly disabled via build config 00:02:36.708 member: explicitly disabled via build config 00:02:36.708 pcapng: explicitly disabled via build config 00:02:36.708 rawdev: explicitly disabled via build config 00:02:36.708 regexdev: explicitly disabled via build config 00:02:36.708 mldev: explicitly disabled via build config 00:02:36.708 rib: explicitly disabled via build config 00:02:36.708 sched: explicitly disabled via build config 00:02:36.708 stack: explicitly disabled via build config 00:02:36.708 ipsec: explicitly disabled via build config 00:02:36.708 pdcp: explicitly disabled via build config 00:02:36.708 fib: explicitly disabled via build config 00:02:36.708 port: explicitly disabled via build config 00:02:36.708 pdump: explicitly disabled via build config 00:02:36.708 table: explicitly disabled via build config 00:02:36.708 pipeline: explicitly disabled via build config 00:02:36.708 graph: explicitly disabled via build config 00:02:36.708 node: explicitly disabled via build config 00:02:36.708 00:02:36.708 drivers: 00:02:36.709 common/cpt: not in enabled drivers build config 00:02:36.709 common/dpaax: not in enabled drivers build config 00:02:36.709 common/iavf: not in enabled drivers build config 00:02:36.709 common/idpf: not in enabled drivers build config 00:02:36.709 common/ionic: not in enabled drivers build config 00:02:36.709 common/mvep: not in enabled drivers build config 00:02:36.709 common/octeontx: not in enabled drivers build config 00:02:36.709 bus/auxiliary: not in enabled drivers build config 00:02:36.709 bus/cdx: not in enabled drivers build config 00:02:36.709 bus/dpaa: not in enabled drivers build config 00:02:36.709 
bus/fslmc: not in enabled drivers build config 00:02:36.709 bus/ifpga: not in enabled drivers build config 00:02:36.709 bus/platform: not in enabled drivers build config 00:02:36.709 bus/uacce: not in enabled drivers build config 00:02:36.709 bus/vmbus: not in enabled drivers build config 00:02:36.709 common/cnxk: not in enabled drivers build config 00:02:36.709 common/mlx5: not in enabled drivers build config 00:02:36.709 common/nfp: not in enabled drivers build config 00:02:36.709 common/nitrox: not in enabled drivers build config 00:02:36.709 common/qat: not in enabled drivers build config 00:02:36.709 common/sfc_efx: not in enabled drivers build config 00:02:36.709 mempool/bucket: not in enabled drivers build config 00:02:36.709 mempool/cnxk: not in enabled drivers build config 00:02:36.709 mempool/dpaa: not in enabled drivers build config 00:02:36.709 mempool/dpaa2: not in enabled drivers build config 00:02:36.709 mempool/octeontx: not in enabled drivers build config 00:02:36.709 mempool/stack: not in enabled drivers build config 00:02:36.709 dma/cnxk: not in enabled drivers build config 00:02:36.709 dma/dpaa: not in enabled drivers build config 00:02:36.709 dma/dpaa2: not in enabled drivers build config 00:02:36.709 dma/hisilicon: not in enabled drivers build config 00:02:36.709 dma/idxd: not in enabled drivers build config 00:02:36.709 dma/ioat: not in enabled drivers build config 00:02:36.709 dma/skeleton: not in enabled drivers build config 00:02:36.709 net/af_packet: not in enabled drivers build config 00:02:36.709 net/af_xdp: not in enabled drivers build config 00:02:36.709 net/ark: not in enabled drivers build config 00:02:36.709 net/atlantic: not in enabled drivers build config 00:02:36.709 net/avp: not in enabled drivers build config 00:02:36.709 net/axgbe: not in enabled drivers build config 00:02:36.709 net/bnx2x: not in enabled drivers build config 00:02:36.709 net/bnxt: not in enabled drivers build config 00:02:36.709 net/bonding: not in enabled drivers build config 00:02:36.709 net/cnxk: not in enabled drivers build config 00:02:36.709 net/cpfl: not in enabled drivers build config 00:02:36.709 net/cxgbe: not in enabled drivers build config 00:02:36.709 net/dpaa: not in enabled drivers build config 00:02:36.709 net/dpaa2: not in enabled drivers build config 00:02:36.709 net/e1000: not in enabled drivers build config 00:02:36.709 net/ena: not in enabled drivers build config 00:02:36.709 net/enetc: not in enabled drivers build config 00:02:36.709 net/enetfec: not in enabled drivers build config 00:02:36.709 net/enic: not in enabled drivers build config 00:02:36.709 net/failsafe: not in enabled drivers build config 00:02:36.709 net/fm10k: not in enabled drivers build config 00:02:36.709 net/gve: not in enabled drivers build config 00:02:36.709 net/hinic: not in enabled drivers build config 00:02:36.709 net/hns3: not in enabled drivers build config 00:02:36.709 net/i40e: not in enabled drivers build config 00:02:36.709 net/iavf: not in enabled drivers build config 00:02:36.709 net/ice: not in enabled drivers build config 00:02:36.709 net/idpf: not in enabled drivers build config 00:02:36.709 net/igc: not in enabled drivers build config 00:02:36.709 net/ionic: not in enabled drivers build config 00:02:36.709 net/ipn3ke: not in enabled drivers build config 00:02:36.709 net/ixgbe: not in enabled drivers build config 00:02:36.709 net/mana: not in enabled drivers build config 00:02:36.709 net/memif: not in enabled drivers build config 00:02:36.709 net/mlx4: not in enabled drivers 
build config 00:02:36.709 net/mlx5: not in enabled drivers build config 00:02:36.709 net/mvneta: not in enabled drivers build config 00:02:36.709 net/mvpp2: not in enabled drivers build config 00:02:36.709 net/netvsc: not in enabled drivers build config 00:02:36.709 net/nfb: not in enabled drivers build config 00:02:36.709 net/nfp: not in enabled drivers build config 00:02:36.709 net/ngbe: not in enabled drivers build config 00:02:36.709 net/null: not in enabled drivers build config 00:02:36.709 net/octeontx: not in enabled drivers build config 00:02:36.709 net/octeon_ep: not in enabled drivers build config 00:02:36.709 net/pcap: not in enabled drivers build config 00:02:36.709 net/pfe: not in enabled drivers build config 00:02:36.709 net/qede: not in enabled drivers build config 00:02:36.709 net/ring: not in enabled drivers build config 00:02:36.709 net/sfc: not in enabled drivers build config 00:02:36.709 net/softnic: not in enabled drivers build config 00:02:36.709 net/tap: not in enabled drivers build config 00:02:36.709 net/thunderx: not in enabled drivers build config 00:02:36.709 net/txgbe: not in enabled drivers build config 00:02:36.709 net/vdev_netvsc: not in enabled drivers build config 00:02:36.709 net/vhost: not in enabled drivers build config 00:02:36.709 net/virtio: not in enabled drivers build config 00:02:36.709 net/vmxnet3: not in enabled drivers build config 00:02:36.709 raw/*: missing internal dependency, "rawdev" 00:02:36.709 crypto/armv8: not in enabled drivers build config 00:02:36.709 crypto/bcmfs: not in enabled drivers build config 00:02:36.709 crypto/caam_jr: not in enabled drivers build config 00:02:36.709 crypto/ccp: not in enabled drivers build config 00:02:36.709 crypto/cnxk: not in enabled drivers build config 00:02:36.709 crypto/dpaa_sec: not in enabled drivers build config 00:02:36.709 crypto/dpaa2_sec: not in enabled drivers build config 00:02:36.709 crypto/ipsec_mb: not in enabled drivers build config 00:02:36.709 crypto/mlx5: not in enabled drivers build config 00:02:36.709 crypto/mvsam: not in enabled drivers build config 00:02:36.709 crypto/nitrox: not in enabled drivers build config 00:02:36.709 crypto/null: not in enabled drivers build config 00:02:36.709 crypto/octeontx: not in enabled drivers build config 00:02:36.709 crypto/openssl: not in enabled drivers build config 00:02:36.709 crypto/scheduler: not in enabled drivers build config 00:02:36.709 crypto/uadk: not in enabled drivers build config 00:02:36.709 crypto/virtio: not in enabled drivers build config 00:02:36.709 compress/isal: not in enabled drivers build config 00:02:36.709 compress/mlx5: not in enabled drivers build config 00:02:36.709 compress/nitrox: not in enabled drivers build config 00:02:36.709 compress/octeontx: not in enabled drivers build config 00:02:36.709 compress/zlib: not in enabled drivers build config 00:02:36.709 regex/*: missing internal dependency, "regexdev" 00:02:36.709 ml/*: missing internal dependency, "mldev" 00:02:36.709 vdpa/ifc: not in enabled drivers build config 00:02:36.709 vdpa/mlx5: not in enabled drivers build config 00:02:36.709 vdpa/nfp: not in enabled drivers build config 00:02:36.709 vdpa/sfc: not in enabled drivers build config 00:02:36.709 event/*: missing internal dependency, "eventdev" 00:02:36.709 baseband/*: missing internal dependency, "bbdev" 00:02:36.709 gpu/*: missing internal dependency, "gpudev" 00:02:36.709 00:02:36.709 00:02:36.709 Build targets in project: 85 00:02:36.709 00:02:36.709 DPDK 24.03.0 00:02:36.709 00:02:36.709 User defined 
options 00:02:36.709 buildtype : debug 00:02:36.709 default_library : shared 00:02:36.709 libdir : lib 00:02:36.709 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:02:36.709 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:36.709 c_link_args : 00:02:36.709 cpu_instruction_set: native 00:02:36.709 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:02:36.709 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:02:36.709 enable_docs : false 00:02:36.709 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:36.709 enable_kmods : false 00:02:36.709 max_lcores : 128 00:02:36.709 tests : false 00:02:36.709 00:02:36.709 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:37.279 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp' 00:02:37.279 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:37.279 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:37.279 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:37.279 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:37.279 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:37.279 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.279 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:37.279 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:37.279 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:37.279 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:37.279 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:37.279 [12/268] Linking static target lib/librte_kvargs.a 00:02:37.279 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:37.279 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:37.562 [15/268] Linking static target lib/librte_log.a 00:02:37.562 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:37.562 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:37.562 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:37.562 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:37.562 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:37.562 [21/268] Linking static target lib/librte_pci.a 00:02:37.562 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:37.562 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:37.562 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:37.821 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:37.821 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:37.821 [27/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.821 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:37.821 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:37.821 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:37.821 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:37.821 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:37.821 [33/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:37.821 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:37.821 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:37.821 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:37.821 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:37.821 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:37.821 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:37.821 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:37.821 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:37.821 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.821 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.821 [44/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.821 [45/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:37.821 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:37.821 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:37.821 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:37.821 [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:37.821 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:37.821 [51/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.821 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:37.821 [53/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:37.821 [54/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:37.821 [55/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:37.821 [56/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:37.821 [57/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:37.821 [58/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:37.821 [59/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:37.821 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:37.821 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:37.821 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:37.821 [63/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:37.821 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:37.821 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.821 [66/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:37.821 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:37.821 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:37.821 [69/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:37.821 [70/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:37.821 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:37.821 [72/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:37.821 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:37.821 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:37.821 [75/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.821 [76/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:37.821 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:37.821 [78/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:37.821 [79/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.821 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:37.821 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:37.821 [82/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:37.821 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:37.821 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:37.821 [85/268] Linking static target lib/librte_meter.a 00:02:38.080 [86/268] Linking static target lib/librte_telemetry.a 00:02:38.080 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:38.080 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:38.080 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.080 [90/268] Linking static target lib/librte_ring.a 00:02:38.080 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.080 [92/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:38.081 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:38.081 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.081 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:38.081 [96/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:38.081 [97/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:38.081 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:38.081 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:38.081 [100/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:38.081 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:38.081 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:38.081 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:38.081 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:38.081 [105/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:38.081 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.081 [107/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:38.081 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:38.081 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:38.081 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:38.081 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:38.081 [112/268] Linking static target lib/librte_net.a 00:02:38.081 [113/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:38.081 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:38.081 [115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:38.081 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:38.081 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:38.081 [118/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:38.081 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:38.081 [120/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:38.081 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:38.081 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:38.081 [123/268] Linking static target lib/librte_mempool.a 00:02:38.081 [124/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:38.081 [125/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.081 [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:38.081 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:38.081 [128/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:38.081 [129/268] Linking static target lib/librte_cmdline.a 00:02:38.081 [130/268] Linking static target lib/librte_eal.a 00:02:38.081 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:38.081 [132/268] Linking static target lib/librte_rcu.a 00:02:38.081 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:38.081 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:38.081 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:38.081 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.340 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:38.340 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:38.340 [139/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.340 [140/268] Linking target lib/librte_log.so.24.1 00:02:38.340 [141/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:38.340 [142/268] Linking static target lib/librte_timer.a 00:02:38.340 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:38.340 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.340 [145/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.340 [146/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:38.340 [147/268] Linking static target lib/librte_mbuf.a 00:02:38.340 [148/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:38.340 [149/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:38.340 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:38.340 [151/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:38.340 [152/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:38.340 [153/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:38.340 [154/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:38.340 [155/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:38.340 [156/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:38.340 [157/268] Linking target lib/librte_kvargs.so.24.1 00:02:38.340 [158/268] Linking static target lib/librte_reorder.a 00:02:38.340 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:38.340 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:38.340 [161/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.340 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:38.340 [163/268] Linking static target lib/librte_dmadev.a 00:02:38.340 [164/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.340 [165/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:38.340 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:38.598 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:38.598 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:38.598 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:38.598 [170/268] Linking target lib/librte_telemetry.so.24.1 00:02:38.598 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:38.598 [172/268] Linking static target lib/librte_compressdev.a 00:02:38.598 [173/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.598 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:38.599 [175/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.599 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.599 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:38.599 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:38.599 [179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:38.599 [180/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:38.599 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:38.599 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:38.599 [183/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:38.599 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:38.599 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.599 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:38.599 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:38.599 [188/268] Linking static 
target drivers/libtmp_rte_bus_pci.a 00:02:38.599 [189/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:38.599 [190/268] Linking static target lib/librte_hash.a 00:02:38.599 [191/268] Linking static target lib/librte_power.a 00:02:38.599 [192/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:38.599 [193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:38.599 [194/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.599 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:38.599 [196/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.599 [197/268] Linking static target lib/librte_security.a 00:02:38.599 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:38.599 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.599 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.857 [201/268] Linking static target drivers/librte_bus_vdev.a 00:02:38.857 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:38.857 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:38.857 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.857 [205/268] Linking static target lib/librte_cryptodev.a 00:02:38.857 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:38.857 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.857 [208/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.857 [209/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.857 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.857 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.857 [212/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.857 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.857 [214/268] Linking static target drivers/librte_bus_pci.a 00:02:38.857 [215/268] Linking static target drivers/librte_mempool_ring.a 00:02:39.115 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.115 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.115 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:39.115 [219/268] Linking static target lib/librte_ethdev.a 00:02:39.115 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.115 [221/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.115 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.373 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.373 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:39.373 [225/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:39.373 [226/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.631 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.564 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:40.564 [229/268] Linking static target lib/librte_vhost.a 00:02:40.564 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.458 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.719 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.719 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.719 [234/268] Linking target lib/librte_eal.so.24.1 00:02:47.719 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:47.719 [236/268] Linking target lib/librte_timer.so.24.1 00:02:47.719 [237/268] Linking target lib/librte_ring.so.24.1 00:02:47.719 [238/268] Linking target lib/librte_meter.so.24.1 00:02:47.719 [239/268] Linking target lib/librte_pci.so.24.1 00:02:47.719 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:47.719 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.976 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.976 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.976 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.976 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.976 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.976 [247/268] Linking target lib/librte_rcu.so.24.1 00:02:47.976 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:47.976 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:48.234 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:48.234 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:48.234 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:48.234 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:48.234 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:48.234 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:48.234 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:48.234 [257/268] Linking target lib/librte_net.so.24.1 00:02:48.234 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:48.491 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:48.491 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:48.491 [261/268] Linking target lib/librte_security.so.24.1 00:02:48.491 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:48.491 [263/268] Linking target lib/librte_hash.so.24.1 00:02:48.491 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:48.747 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:48.747 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:48.747 [267/268] Linking target lib/librte_power.so.24.1 00:02:48.747 [268/268] Linking 
target lib/librte_vhost.so.24.1 00:02:48.747 INFO: autodetecting backend as ninja 00:02:48.747 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:49.676 CC lib/log/log_flags.o 00:02:49.676 CC lib/log/log.o 00:02:49.676 CC lib/log/log_deprecated.o 00:02:49.676 CC lib/ut_mock/mock.o 00:02:49.676 CC lib/ut/ut.o 00:02:49.676 LIB libspdk_log.a 00:02:49.932 LIB libspdk_ut_mock.a 00:02:49.932 SO libspdk_log.so.7.0 00:02:49.932 LIB libspdk_ut.a 00:02:49.932 SO libspdk_ut_mock.so.6.0 00:02:49.932 SO libspdk_ut.so.2.0 00:02:49.932 SYMLINK libspdk_log.so 00:02:49.932 SYMLINK libspdk_ut_mock.so 00:02:49.932 SYMLINK libspdk_ut.so 00:02:50.189 CC lib/dma/dma.o 00:02:50.189 CC lib/ioat/ioat.o 00:02:50.189 CC lib/util/base64.o 00:02:50.189 CC lib/util/bit_array.o 00:02:50.189 CC lib/util/crc16.o 00:02:50.189 CC lib/util/cpuset.o 00:02:50.189 CC lib/util/crc32.o 00:02:50.189 CC lib/util/crc32c.o 00:02:50.189 CC lib/util/crc32_ieee.o 00:02:50.189 CC lib/util/crc64.o 00:02:50.189 CC lib/util/dif.o 00:02:50.189 CC lib/util/fd.o 00:02:50.189 CC lib/util/file.o 00:02:50.189 CC lib/util/hexlify.o 00:02:50.189 CC lib/util/math.o 00:02:50.189 CC lib/util/iov.o 00:02:50.189 CC lib/util/pipe.o 00:02:50.189 CC lib/util/string.o 00:02:50.189 CC lib/util/strerror_tls.o 00:02:50.189 CC lib/util/fd_group.o 00:02:50.189 CC lib/util/uuid.o 00:02:50.189 CC lib/util/zipf.o 00:02:50.189 CC lib/util/xor.o 00:02:50.189 CXX lib/trace_parser/trace.o 00:02:50.445 CC lib/vfio_user/host/vfio_user.o 00:02:50.445 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.445 LIB libspdk_dma.a 00:02:50.445 SO libspdk_dma.so.4.0 00:02:50.445 LIB libspdk_ioat.a 00:02:50.445 SO libspdk_ioat.so.7.0 00:02:50.445 SYMLINK libspdk_dma.so 00:02:50.445 SYMLINK libspdk_ioat.so 00:02:50.445 LIB libspdk_vfio_user.a 00:02:50.702 SO libspdk_vfio_user.so.5.0 00:02:50.702 LIB libspdk_util.a 00:02:50.702 SYMLINK libspdk_vfio_user.so 00:02:50.702 SO libspdk_util.so.9.1 00:02:50.702 SYMLINK libspdk_util.so 00:02:50.960 LIB libspdk_trace_parser.a 00:02:50.960 SO libspdk_trace_parser.so.5.0 00:02:50.960 SYMLINK libspdk_trace_parser.so 00:02:50.960 CC lib/conf/conf.o 00:02:50.960 CC lib/rdma_utils/rdma_utils.o 00:02:50.960 CC lib/json/json_parse.o 00:02:50.960 CC lib/json/json_util.o 00:02:51.218 CC lib/json/json_write.o 00:02:51.218 CC lib/rdma_provider/common.o 00:02:51.218 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:51.218 CC lib/vmd/led.o 00:02:51.218 CC lib/vmd/vmd.o 00:02:51.219 CC lib/env_dpdk/env.o 00:02:51.219 CC lib/env_dpdk/memory.o 00:02:51.219 CC lib/env_dpdk/init.o 00:02:51.219 CC lib/env_dpdk/pci.o 00:02:51.219 CC lib/env_dpdk/pci_ioat.o 00:02:51.219 CC lib/env_dpdk/threads.o 00:02:51.219 CC lib/idxd/idxd.o 00:02:51.219 CC lib/idxd/idxd_user.o 00:02:51.219 CC lib/env_dpdk/pci_vmd.o 00:02:51.219 CC lib/env_dpdk/pci_virtio.o 00:02:51.219 CC lib/idxd/idxd_kernel.o 00:02:51.219 CC lib/env_dpdk/pci_idxd.o 00:02:51.219 CC lib/env_dpdk/pci_event.o 00:02:51.219 CC lib/env_dpdk/pci_dpdk.o 00:02:51.219 CC lib/env_dpdk/sigbus_handler.o 00:02:51.219 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:51.219 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.219 LIB libspdk_conf.a 00:02:51.219 SO libspdk_conf.so.6.0 00:02:51.219 LIB libspdk_rdma_provider.a 00:02:51.219 LIB libspdk_rdma_utils.a 00:02:51.219 SO libspdk_rdma_provider.so.6.0 00:02:51.477 SYMLINK libspdk_conf.so 00:02:51.477 LIB libspdk_json.a 00:02:51.477 SO libspdk_rdma_utils.so.1.0 00:02:51.477 SYMLINK libspdk_rdma_provider.so 
00:02:51.477 SO libspdk_json.so.6.0 00:02:51.477 SYMLINK libspdk_rdma_utils.so 00:02:51.477 SYMLINK libspdk_json.so 00:02:51.477 LIB libspdk_idxd.a 00:02:51.477 SO libspdk_idxd.so.12.0 00:02:51.734 LIB libspdk_vmd.a 00:02:51.734 SO libspdk_vmd.so.6.0 00:02:51.734 SYMLINK libspdk_idxd.so 00:02:51.734 SYMLINK libspdk_vmd.so 00:02:51.734 CC lib/jsonrpc/jsonrpc_server.o 00:02:51.734 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:51.734 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:51.734 CC lib/jsonrpc/jsonrpc_client.o 00:02:51.992 LIB libspdk_jsonrpc.a 00:02:51.992 SO libspdk_jsonrpc.so.6.0 00:02:51.992 SYMLINK libspdk_jsonrpc.so 00:02:52.250 LIB libspdk_env_dpdk.a 00:02:52.250 SO libspdk_env_dpdk.so.14.1 00:02:52.250 SYMLINK libspdk_env_dpdk.so 00:02:52.250 CC lib/rpc/rpc.o 00:02:52.509 LIB libspdk_rpc.a 00:02:52.509 SO libspdk_rpc.so.6.0 00:02:52.509 SYMLINK libspdk_rpc.so 00:02:52.767 CC lib/trace/trace.o 00:02:52.767 CC lib/trace/trace_flags.o 00:02:52.767 CC lib/trace/trace_rpc.o 00:02:52.767 CC lib/notify/notify.o 00:02:52.767 CC lib/notify/notify_rpc.o 00:02:53.025 CC lib/keyring/keyring.o 00:02:53.025 CC lib/keyring/keyring_rpc.o 00:02:53.025 LIB libspdk_notify.a 00:02:53.025 SO libspdk_notify.so.6.0 00:02:53.025 LIB libspdk_keyring.a 00:02:53.025 LIB libspdk_trace.a 00:02:53.025 SYMLINK libspdk_notify.so 00:02:53.025 SO libspdk_trace.so.10.0 00:02:53.025 SO libspdk_keyring.so.1.0 00:02:53.284 SYMLINK libspdk_trace.so 00:02:53.284 SYMLINK libspdk_keyring.so 00:02:53.543 CC lib/sock/sock.o 00:02:53.543 CC lib/sock/sock_rpc.o 00:02:53.543 CC lib/thread/iobuf.o 00:02:53.543 CC lib/thread/thread.o 00:02:53.800 LIB libspdk_sock.a 00:02:53.800 SO libspdk_sock.so.10.0 00:02:53.800 SYMLINK libspdk_sock.so 00:02:54.058 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.058 CC lib/nvme/nvme_ctrlr.o 00:02:54.058 CC lib/nvme/nvme_ns_cmd.o 00:02:54.058 CC lib/nvme/nvme_fabric.o 00:02:54.058 CC lib/nvme/nvme_ns.o 00:02:54.058 CC lib/nvme/nvme_pcie_common.o 00:02:54.058 CC lib/nvme/nvme.o 00:02:54.058 CC lib/nvme/nvme_pcie.o 00:02:54.058 CC lib/nvme/nvme_qpair.o 00:02:54.058 CC lib/nvme/nvme_quirks.o 00:02:54.058 CC lib/nvme/nvme_transport.o 00:02:54.058 CC lib/nvme/nvme_discovery.o 00:02:54.058 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.058 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.058 CC lib/nvme/nvme_tcp.o 00:02:54.058 CC lib/nvme/nvme_opal.o 00:02:54.058 CC lib/nvme/nvme_io_msg.o 00:02:54.058 CC lib/nvme/nvme_poll_group.o 00:02:54.058 CC lib/nvme/nvme_zns.o 00:02:54.058 CC lib/nvme/nvme_stubs.o 00:02:54.058 CC lib/nvme/nvme_auth.o 00:02:54.058 CC lib/nvme/nvme_cuse.o 00:02:54.058 CC lib/nvme/nvme_rdma.o 00:02:54.620 LIB libspdk_thread.a 00:02:54.620 SO libspdk_thread.so.10.1 00:02:54.620 SYMLINK libspdk_thread.so 00:02:54.877 CC lib/init/json_config.o 00:02:54.877 CC lib/virtio/virtio.o 00:02:54.877 CC lib/init/subsystem.o 00:02:54.877 CC lib/virtio/virtio_vfio_user.o 00:02:54.877 CC lib/init/subsystem_rpc.o 00:02:54.877 CC lib/accel/accel.o 00:02:54.877 CC lib/virtio/virtio_vhost_user.o 00:02:54.877 CC lib/init/rpc.o 00:02:54.877 CC lib/accel/accel_sw.o 00:02:54.877 CC lib/accel/accel_rpc.o 00:02:54.877 CC lib/virtio/virtio_pci.o 00:02:54.877 CC lib/blob/blobstore.o 00:02:54.877 CC lib/blob/request.o 00:02:54.877 CC lib/blob/zeroes.o 00:02:54.877 CC lib/blob/blob_bs_dev.o 00:02:55.134 LIB libspdk_init.a 00:02:55.134 SO libspdk_init.so.5.0 00:02:55.134 SYMLINK libspdk_init.so 00:02:55.134 LIB libspdk_virtio.a 00:02:55.392 SO libspdk_virtio.so.7.0 00:02:55.392 SYMLINK libspdk_virtio.so 00:02:55.392 CC lib/event/app.o 
00:02:55.392 CC lib/event/reactor.o 00:02:55.392 CC lib/event/log_rpc.o 00:02:55.392 CC lib/event/app_rpc.o 00:02:55.649 CC lib/event/scheduler_static.o 00:02:55.649 LIB libspdk_accel.a 00:02:55.649 SO libspdk_accel.so.15.1 00:02:55.649 SYMLINK libspdk_accel.so 00:02:55.649 LIB libspdk_nvme.a 00:02:55.906 LIB libspdk_event.a 00:02:55.906 SO libspdk_nvme.so.13.1 00:02:55.906 SO libspdk_event.so.14.0 00:02:55.906 SYMLINK libspdk_event.so 00:02:55.906 CC lib/bdev/bdev.o 00:02:55.906 CC lib/bdev/part.o 00:02:55.906 CC lib/bdev/bdev_rpc.o 00:02:55.906 CC lib/bdev/bdev_zone.o 00:02:55.906 CC lib/bdev/scsi_nvme.o 00:02:56.165 SYMLINK libspdk_nvme.so 00:02:57.176 LIB libspdk_blob.a 00:02:57.176 SO libspdk_blob.so.11.0 00:02:57.176 SYMLINK libspdk_blob.so 00:02:57.462 CC lib/blobfs/blobfs.o 00:02:57.462 CC lib/blobfs/tree.o 00:02:57.462 CC lib/lvol/lvol.o 00:02:57.727 LIB libspdk_bdev.a 00:02:57.727 SO libspdk_bdev.so.15.1 00:02:57.985 SYMLINK libspdk_bdev.so 00:02:57.985 LIB libspdk_blobfs.a 00:02:57.985 SO libspdk_blobfs.so.10.0 00:02:57.985 LIB libspdk_lvol.a 00:02:57.985 SO libspdk_lvol.so.10.0 00:02:57.985 SYMLINK libspdk_blobfs.so 00:02:57.985 SYMLINK libspdk_lvol.so 00:02:58.250 CC lib/ublk/ublk.o 00:02:58.250 CC lib/ublk/ublk_rpc.o 00:02:58.250 CC lib/ftl/ftl_core.o 00:02:58.250 CC lib/nbd/nbd.o 00:02:58.250 CC lib/ftl/ftl_init.o 00:02:58.250 CC lib/nbd/nbd_rpc.o 00:02:58.250 CC lib/ftl/ftl_layout.o 00:02:58.250 CC lib/nvmf/ctrlr.o 00:02:58.250 CC lib/nvmf/ctrlr_discovery.o 00:02:58.250 CC lib/nvmf/subsystem.o 00:02:58.250 CC lib/ftl/ftl_debug.o 00:02:58.250 CC lib/ftl/ftl_io.o 00:02:58.250 CC lib/nvmf/ctrlr_bdev.o 00:02:58.250 CC lib/ftl/ftl_sb.o 00:02:58.250 CC lib/nvmf/nvmf.o 00:02:58.250 CC lib/ftl/ftl_l2p.o 00:02:58.250 CC lib/nvmf/nvmf_rpc.o 00:02:58.250 CC lib/nvmf/transport.o 00:02:58.250 CC lib/ftl/ftl_l2p_flat.o 00:02:58.250 CC lib/nvmf/tcp.o 00:02:58.250 CC lib/scsi/dev.o 00:02:58.250 CC lib/ftl/ftl_nv_cache.o 00:02:58.250 CC lib/scsi/port.o 00:02:58.250 CC lib/nvmf/stubs.o 00:02:58.250 CC lib/scsi/lun.o 00:02:58.250 CC lib/ftl/ftl_band.o 00:02:58.250 CC lib/nvmf/mdns_server.o 00:02:58.250 CC lib/scsi/scsi.o 00:02:58.250 CC lib/nvmf/rdma.o 00:02:58.250 CC lib/ftl/ftl_band_ops.o 00:02:58.250 CC lib/ftl/ftl_writer.o 00:02:58.250 CC lib/scsi/scsi_bdev.o 00:02:58.250 CC lib/nvmf/auth.o 00:02:58.250 CC lib/ftl/ftl_rq.o 00:02:58.250 CC lib/ftl/ftl_reloc.o 00:02:58.250 CC lib/scsi/scsi_pr.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt.o 00:02:58.250 CC lib/ftl/ftl_l2p_cache.o 00:02:58.250 CC lib/scsi/scsi_rpc.o 00:02:58.250 CC lib/scsi/task.o 00:02:58.250 CC lib/ftl/ftl_p2l.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:58.250 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:58.250 CC lib/ftl/utils/ftl_conf.o 00:02:58.250 CC lib/ftl/utils/ftl_md.o 00:02:58.250 CC lib/ftl/utils/ftl_mempool.o 00:02:58.250 CC lib/ftl/utils/ftl_property.o 00:02:58.250 CC lib/ftl/utils/ftl_bitmap.o 00:02:58.250 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:58.250 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:58.250 CC lib/ftl/upgrade/ftl_layout_upgrade.o 
00:02:58.250 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:58.250 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:58.250 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:58.250 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:58.250 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:58.250 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:58.250 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:58.250 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:58.250 CC lib/ftl/base/ftl_base_dev.o 00:02:58.250 CC lib/ftl/ftl_trace.o 00:02:58.250 CC lib/ftl/base/ftl_base_bdev.o 00:02:58.814 LIB libspdk_nbd.a 00:02:58.814 LIB libspdk_scsi.a 00:02:58.814 SO libspdk_nbd.so.7.0 00:02:58.814 SO libspdk_scsi.so.9.0 00:02:58.814 SYMLINK libspdk_nbd.so 00:02:58.814 LIB libspdk_ublk.a 00:02:58.814 SYMLINK libspdk_scsi.so 00:02:58.814 SO libspdk_ublk.so.3.0 00:02:58.814 SYMLINK libspdk_ublk.so 00:02:59.072 CC lib/vhost/vhost.o 00:02:59.072 CC lib/vhost/vhost_rpc.o 00:02:59.072 CC lib/iscsi/conn.o 00:02:59.072 CC lib/vhost/vhost_scsi.o 00:02:59.072 CC lib/vhost/rte_vhost_user.o 00:02:59.072 CC lib/iscsi/init_grp.o 00:02:59.072 CC lib/vhost/vhost_blk.o 00:02:59.072 CC lib/iscsi/iscsi.o 00:02:59.072 CC lib/iscsi/md5.o 00:02:59.072 CC lib/iscsi/param.o 00:02:59.072 CC lib/iscsi/portal_grp.o 00:02:59.072 CC lib/iscsi/tgt_node.o 00:02:59.072 CC lib/iscsi/iscsi_subsystem.o 00:02:59.072 CC lib/iscsi/task.o 00:02:59.072 CC lib/iscsi/iscsi_rpc.o 00:02:59.072 LIB libspdk_ftl.a 00:02:59.329 SO libspdk_ftl.so.9.0 00:02:59.587 SYMLINK libspdk_ftl.so 00:02:59.844 LIB libspdk_nvmf.a 00:02:59.844 SO libspdk_nvmf.so.19.0 00:02:59.844 LIB libspdk_vhost.a 00:02:59.844 SO libspdk_vhost.so.8.0 00:03:00.102 SYMLINK libspdk_vhost.so 00:03:00.102 SYMLINK libspdk_nvmf.so 00:03:00.102 LIB libspdk_iscsi.a 00:03:00.102 SO libspdk_iscsi.so.8.0 00:03:00.360 SYMLINK libspdk_iscsi.so 00:03:00.618 CC module/env_dpdk/env_dpdk_rpc.o 00:03:00.875 CC module/keyring/linux/keyring.o 00:03:00.875 CC module/keyring/linux/keyring_rpc.o 00:03:00.875 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:00.875 LIB libspdk_env_dpdk_rpc.a 00:03:00.875 CC module/scheduler/gscheduler/gscheduler.o 00:03:00.875 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:00.875 CC module/keyring/file/keyring.o 00:03:00.875 CC module/keyring/file/keyring_rpc.o 00:03:00.875 CC module/accel/error/accel_error.o 00:03:00.875 CC module/accel/error/accel_error_rpc.o 00:03:00.875 CC module/accel/iaa/accel_iaa.o 00:03:00.875 CC module/accel/iaa/accel_iaa_rpc.o 00:03:00.875 CC module/accel/ioat/accel_ioat.o 00:03:00.875 CC module/accel/ioat/accel_ioat_rpc.o 00:03:00.875 CC module/sock/posix/posix.o 00:03:00.875 CC module/blob/bdev/blob_bdev.o 00:03:00.875 CC module/accel/dsa/accel_dsa.o 00:03:00.875 CC module/accel/dsa/accel_dsa_rpc.o 00:03:00.875 SO libspdk_env_dpdk_rpc.so.6.0 00:03:00.875 SYMLINK libspdk_env_dpdk_rpc.so 00:03:00.875 LIB libspdk_keyring_linux.a 00:03:00.875 SO libspdk_keyring_linux.so.1.0 00:03:00.875 LIB libspdk_keyring_file.a 00:03:00.875 LIB libspdk_scheduler_dynamic.a 00:03:00.875 LIB libspdk_scheduler_gscheduler.a 00:03:00.875 LIB libspdk_scheduler_dpdk_governor.a 00:03:00.875 SO libspdk_scheduler_dynamic.so.4.0 00:03:00.875 LIB libspdk_accel_error.a 00:03:00.875 SO libspdk_scheduler_gscheduler.so.4.0 00:03:00.875 SO libspdk_keyring_file.so.1.0 00:03:01.132 SYMLINK libspdk_keyring_linux.so 00:03:01.132 LIB libspdk_accel_ioat.a 00:03:01.132 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:01.132 LIB libspdk_accel_iaa.a 00:03:01.132 SO libspdk_accel_error.so.2.0 00:03:01.132 SYMLINK libspdk_scheduler_dynamic.so 
00:03:01.132 SO libspdk_accel_ioat.so.6.0 00:03:01.132 SYMLINK libspdk_scheduler_gscheduler.so 00:03:01.132 SYMLINK libspdk_keyring_file.so 00:03:01.132 SO libspdk_accel_iaa.so.3.0 00:03:01.132 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:01.132 LIB libspdk_blob_bdev.a 00:03:01.133 LIB libspdk_accel_dsa.a 00:03:01.133 SYMLINK libspdk_accel_error.so 00:03:01.133 SO libspdk_accel_dsa.so.5.0 00:03:01.133 SO libspdk_blob_bdev.so.11.0 00:03:01.133 SYMLINK libspdk_accel_ioat.so 00:03:01.133 SYMLINK libspdk_accel_iaa.so 00:03:01.133 SYMLINK libspdk_blob_bdev.so 00:03:01.133 SYMLINK libspdk_accel_dsa.so 00:03:01.390 LIB libspdk_sock_posix.a 00:03:01.390 SO libspdk_sock_posix.so.6.0 00:03:01.647 CC module/bdev/lvol/vbdev_lvol.o 00:03:01.647 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:01.647 CC module/bdev/error/vbdev_error.o 00:03:01.647 CC module/bdev/error/vbdev_error_rpc.o 00:03:01.647 SYMLINK libspdk_sock_posix.so 00:03:01.647 CC module/bdev/raid/bdev_raid.o 00:03:01.647 CC module/bdev/raid/bdev_raid_rpc.o 00:03:01.647 CC module/bdev/raid/bdev_raid_sb.o 00:03:01.647 CC module/bdev/raid/raid1.o 00:03:01.647 CC module/bdev/raid/raid0.o 00:03:01.647 CC module/bdev/raid/concat.o 00:03:01.647 CC module/bdev/aio/bdev_aio.o 00:03:01.647 CC module/bdev/aio/bdev_aio_rpc.o 00:03:01.647 CC module/bdev/passthru/vbdev_passthru.o 00:03:01.647 CC module/bdev/nvme/bdev_nvme.o 00:03:01.647 CC module/blobfs/bdev/blobfs_bdev.o 00:03:01.647 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:01.647 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:01.647 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:01.647 CC module/bdev/delay/vbdev_delay.o 00:03:01.647 CC module/bdev/nvme/nvme_rpc.o 00:03:01.647 CC module/bdev/gpt/gpt.o 00:03:01.647 CC module/bdev/nvme/bdev_mdns_client.o 00:03:01.647 CC module/bdev/nvme/vbdev_opal.o 00:03:01.647 CC module/bdev/gpt/vbdev_gpt.o 00:03:01.647 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:01.647 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:01.647 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:01.647 CC module/bdev/malloc/bdev_malloc.o 00:03:01.647 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:01.647 CC module/bdev/split/vbdev_split.o 00:03:01.647 CC module/bdev/ftl/bdev_ftl.o 00:03:01.647 CC module/bdev/split/vbdev_split_rpc.o 00:03:01.647 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:01.647 CC module/bdev/iscsi/bdev_iscsi.o 00:03:01.647 CC module/bdev/null/bdev_null.o 00:03:01.647 CC module/bdev/null/bdev_null_rpc.o 00:03:01.647 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:01.647 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:01.647 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:01.647 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:01.647 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:01.647 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:01.904 LIB libspdk_blobfs_bdev.a 00:03:01.904 LIB libspdk_bdev_error.a 00:03:01.904 SO libspdk_blobfs_bdev.so.6.0 00:03:01.904 SO libspdk_bdev_error.so.6.0 00:03:01.904 LIB libspdk_bdev_null.a 00:03:01.904 LIB libspdk_bdev_split.a 00:03:01.904 LIB libspdk_bdev_passthru.a 00:03:01.904 LIB libspdk_bdev_gpt.a 00:03:01.904 LIB libspdk_bdev_ftl.a 00:03:01.904 SYMLINK libspdk_blobfs_bdev.so 00:03:01.904 SO libspdk_bdev_null.so.6.0 00:03:01.904 SO libspdk_bdev_passthru.so.6.0 00:03:01.904 SO libspdk_bdev_split.so.6.0 00:03:01.904 LIB libspdk_bdev_aio.a 00:03:01.904 SO libspdk_bdev_gpt.so.6.0 00:03:01.904 SYMLINK libspdk_bdev_error.so 00:03:01.904 SO libspdk_bdev_ftl.so.6.0 00:03:01.904 LIB libspdk_bdev_delay.a 00:03:01.904 LIB libspdk_bdev_zone_block.a 
00:03:01.904 SO libspdk_bdev_aio.so.6.0 00:03:01.904 SYMLINK libspdk_bdev_null.so 00:03:01.904 LIB libspdk_bdev_iscsi.a 00:03:01.904 SYMLINK libspdk_bdev_passthru.so 00:03:01.904 SO libspdk_bdev_delay.so.6.0 00:03:01.904 SYMLINK libspdk_bdev_split.so 00:03:01.904 SYMLINK libspdk_bdev_gpt.so 00:03:01.904 LIB libspdk_bdev_malloc.a 00:03:01.904 SYMLINK libspdk_bdev_ftl.so 00:03:01.904 SO libspdk_bdev_iscsi.so.6.0 00:03:01.904 SO libspdk_bdev_zone_block.so.6.0 00:03:02.162 LIB libspdk_bdev_lvol.a 00:03:02.162 SYMLINK libspdk_bdev_aio.so 00:03:02.162 SO libspdk_bdev_malloc.so.6.0 00:03:02.162 SYMLINK libspdk_bdev_delay.so 00:03:02.162 SO libspdk_bdev_lvol.so.6.0 00:03:02.162 SYMLINK libspdk_bdev_zone_block.so 00:03:02.162 SYMLINK libspdk_bdev_iscsi.so 00:03:02.162 LIB libspdk_bdev_virtio.a 00:03:02.162 SYMLINK libspdk_bdev_malloc.so 00:03:02.162 SO libspdk_bdev_virtio.so.6.0 00:03:02.162 SYMLINK libspdk_bdev_lvol.so 00:03:02.162 SYMLINK libspdk_bdev_virtio.so 00:03:02.420 LIB libspdk_bdev_raid.a 00:03:02.420 SO libspdk_bdev_raid.so.6.0 00:03:02.420 SYMLINK libspdk_bdev_raid.so 00:03:03.380 LIB libspdk_bdev_nvme.a 00:03:03.380 SO libspdk_bdev_nvme.so.7.0 00:03:03.380 SYMLINK libspdk_bdev_nvme.so 00:03:03.943 CC module/event/subsystems/keyring/keyring.o 00:03:03.943 CC module/event/subsystems/scheduler/scheduler.o 00:03:03.943 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:03.943 CC module/event/subsystems/vmd/vmd.o 00:03:03.943 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:03.943 CC module/event/subsystems/iobuf/iobuf.o 00:03:03.943 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:03.943 CC module/event/subsystems/sock/sock.o 00:03:03.943 LIB libspdk_event_scheduler.a 00:03:03.943 LIB libspdk_event_keyring.a 00:03:03.943 LIB libspdk_event_vhost_blk.a 00:03:03.943 SO libspdk_event_scheduler.so.4.0 00:03:03.943 SO libspdk_event_keyring.so.1.0 00:03:03.943 LIB libspdk_event_sock.a 00:03:03.943 LIB libspdk_event_vmd.a 00:03:03.943 LIB libspdk_event_iobuf.a 00:03:03.943 SO libspdk_event_vhost_blk.so.3.0 00:03:04.200 SO libspdk_event_sock.so.5.0 00:03:04.200 SO libspdk_event_vmd.so.6.0 00:03:04.200 SO libspdk_event_iobuf.so.3.0 00:03:04.200 SYMLINK libspdk_event_keyring.so 00:03:04.200 SYMLINK libspdk_event_scheduler.so 00:03:04.200 SYMLINK libspdk_event_vhost_blk.so 00:03:04.200 SYMLINK libspdk_event_sock.so 00:03:04.200 SYMLINK libspdk_event_vmd.so 00:03:04.200 SYMLINK libspdk_event_iobuf.so 00:03:04.456 CC module/event/subsystems/accel/accel.o 00:03:04.456 LIB libspdk_event_accel.a 00:03:04.713 SO libspdk_event_accel.so.6.0 00:03:04.713 SYMLINK libspdk_event_accel.so 00:03:04.969 CC module/event/subsystems/bdev/bdev.o 00:03:04.969 LIB libspdk_event_bdev.a 00:03:05.226 SO libspdk_event_bdev.so.6.0 00:03:05.226 SYMLINK libspdk_event_bdev.so 00:03:05.483 CC module/event/subsystems/scsi/scsi.o 00:03:05.483 CC module/event/subsystems/nbd/nbd.o 00:03:05.483 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:05.483 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:05.483 CC module/event/subsystems/ublk/ublk.o 00:03:05.483 LIB libspdk_event_nbd.a 00:03:05.483 LIB libspdk_event_scsi.a 00:03:05.483 LIB libspdk_event_ublk.a 00:03:05.741 SO libspdk_event_nbd.so.6.0 00:03:05.741 SO libspdk_event_scsi.so.6.0 00:03:05.741 SO libspdk_event_ublk.so.3.0 00:03:05.741 LIB libspdk_event_nvmf.a 00:03:05.741 SYMLINK libspdk_event_nbd.so 00:03:05.741 SYMLINK libspdk_event_scsi.so 00:03:05.741 SO libspdk_event_nvmf.so.6.0 00:03:05.741 SYMLINK libspdk_event_ublk.so 00:03:05.741 SYMLINK libspdk_event_nvmf.so 
00:03:05.997 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:05.997 CC module/event/subsystems/iscsi/iscsi.o 00:03:05.997 LIB libspdk_event_vhost_scsi.a 00:03:06.254 SO libspdk_event_vhost_scsi.so.3.0 00:03:06.254 LIB libspdk_event_iscsi.a 00:03:06.254 SO libspdk_event_iscsi.so.6.0 00:03:06.254 SYMLINK libspdk_event_vhost_scsi.so 00:03:06.254 SYMLINK libspdk_event_iscsi.so 00:03:06.511 SO libspdk.so.6.0 00:03:06.511 SYMLINK libspdk.so 00:03:06.776 CC app/trace_record/trace_record.o 00:03:06.776 CC app/spdk_nvme_perf/perf.o 00:03:06.776 TEST_HEADER include/spdk/accel.h 00:03:06.777 TEST_HEADER include/spdk/accel_module.h 00:03:06.777 TEST_HEADER include/spdk/assert.h 00:03:06.777 TEST_HEADER include/spdk/barrier.h 00:03:06.777 TEST_HEADER include/spdk/bdev.h 00:03:06.777 TEST_HEADER include/spdk/base64.h 00:03:06.777 CC app/spdk_top/spdk_top.o 00:03:06.777 TEST_HEADER include/spdk/bdev_zone.h 00:03:06.777 TEST_HEADER include/spdk/bdev_module.h 00:03:06.777 TEST_HEADER include/spdk/bit_array.h 00:03:06.777 TEST_HEADER include/spdk/bit_pool.h 00:03:06.777 CC test/rpc_client/rpc_client_test.o 00:03:06.777 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:06.777 TEST_HEADER include/spdk/blob_bdev.h 00:03:06.777 TEST_HEADER include/spdk/blobfs.h 00:03:06.777 CC app/spdk_nvme_discover/discovery_aer.o 00:03:06.777 TEST_HEADER include/spdk/conf.h 00:03:06.777 TEST_HEADER include/spdk/blob.h 00:03:06.777 CC app/spdk_nvme_identify/identify.o 00:03:06.777 TEST_HEADER include/spdk/config.h 00:03:06.777 TEST_HEADER include/spdk/cpuset.h 00:03:06.777 TEST_HEADER include/spdk/crc16.h 00:03:06.777 CXX app/trace/trace.o 00:03:06.777 TEST_HEADER include/spdk/crc32.h 00:03:06.777 TEST_HEADER include/spdk/dif.h 00:03:06.777 TEST_HEADER include/spdk/crc64.h 00:03:06.777 TEST_HEADER include/spdk/dma.h 00:03:06.777 TEST_HEADER include/spdk/endian.h 00:03:06.777 TEST_HEADER include/spdk/env.h 00:03:06.777 TEST_HEADER include/spdk/event.h 00:03:06.777 TEST_HEADER include/spdk/env_dpdk.h 00:03:06.777 TEST_HEADER include/spdk/fd_group.h 00:03:06.777 TEST_HEADER include/spdk/fd.h 00:03:06.777 TEST_HEADER include/spdk/file.h 00:03:06.777 TEST_HEADER include/spdk/ftl.h 00:03:06.777 TEST_HEADER include/spdk/gpt_spec.h 00:03:06.777 TEST_HEADER include/spdk/histogram_data.h 00:03:06.777 CC app/spdk_lspci/spdk_lspci.o 00:03:06.777 TEST_HEADER include/spdk/hexlify.h 00:03:06.777 TEST_HEADER include/spdk/idxd_spec.h 00:03:06.777 TEST_HEADER include/spdk/idxd.h 00:03:06.777 TEST_HEADER include/spdk/init.h 00:03:06.777 TEST_HEADER include/spdk/ioat.h 00:03:06.777 TEST_HEADER include/spdk/ioat_spec.h 00:03:06.777 TEST_HEADER include/spdk/iscsi_spec.h 00:03:06.777 TEST_HEADER include/spdk/json.h 00:03:06.777 TEST_HEADER include/spdk/keyring_module.h 00:03:06.777 TEST_HEADER include/spdk/keyring.h 00:03:06.777 TEST_HEADER include/spdk/jsonrpc.h 00:03:06.777 TEST_HEADER include/spdk/lvol.h 00:03:06.777 TEST_HEADER include/spdk/likely.h 00:03:06.777 TEST_HEADER include/spdk/log.h 00:03:06.777 TEST_HEADER include/spdk/mmio.h 00:03:06.777 TEST_HEADER include/spdk/memory.h 00:03:06.777 TEST_HEADER include/spdk/nbd.h 00:03:06.777 TEST_HEADER include/spdk/notify.h 00:03:06.777 TEST_HEADER include/spdk/nvme.h 00:03:06.777 TEST_HEADER include/spdk/nvme_intel.h 00:03:06.777 TEST_HEADER include/spdk/nvme_spec.h 00:03:06.777 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:06.777 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:06.777 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:06.777 TEST_HEADER include/spdk/nvmf.h 00:03:06.777 TEST_HEADER 
include/spdk/nvme_zns.h 00:03:06.777 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:06.777 TEST_HEADER include/spdk/nvmf_spec.h 00:03:06.777 TEST_HEADER include/spdk/opal.h 00:03:06.777 TEST_HEADER include/spdk/pci_ids.h 00:03:06.777 TEST_HEADER include/spdk/nvmf_transport.h 00:03:06.777 TEST_HEADER include/spdk/opal_spec.h 00:03:06.777 TEST_HEADER include/spdk/reduce.h 00:03:06.777 TEST_HEADER include/spdk/queue.h 00:03:06.777 TEST_HEADER include/spdk/rpc.h 00:03:06.777 TEST_HEADER include/spdk/pipe.h 00:03:06.777 TEST_HEADER include/spdk/scsi.h 00:03:06.777 TEST_HEADER include/spdk/scsi_spec.h 00:03:06.777 TEST_HEADER include/spdk/scheduler.h 00:03:06.777 TEST_HEADER include/spdk/stdinc.h 00:03:06.777 TEST_HEADER include/spdk/sock.h 00:03:06.777 TEST_HEADER include/spdk/thread.h 00:03:06.777 TEST_HEADER include/spdk/trace.h 00:03:06.777 TEST_HEADER include/spdk/string.h 00:03:06.777 CC app/iscsi_tgt/iscsi_tgt.o 00:03:06.777 TEST_HEADER include/spdk/ublk.h 00:03:06.777 TEST_HEADER include/spdk/trace_parser.h 00:03:06.777 TEST_HEADER include/spdk/tree.h 00:03:06.777 TEST_HEADER include/spdk/util.h 00:03:06.777 TEST_HEADER include/spdk/uuid.h 00:03:06.777 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:06.777 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:06.777 TEST_HEADER include/spdk/version.h 00:03:06.777 TEST_HEADER include/spdk/vhost.h 00:03:06.777 TEST_HEADER include/spdk/xor.h 00:03:06.777 TEST_HEADER include/spdk/vmd.h 00:03:06.777 CC app/spdk_dd/spdk_dd.o 00:03:06.777 TEST_HEADER include/spdk/zipf.h 00:03:06.777 CXX test/cpp_headers/accel.o 00:03:06.777 CXX test/cpp_headers/assert.o 00:03:06.777 CXX test/cpp_headers/accel_module.o 00:03:06.777 CXX test/cpp_headers/barrier.o 00:03:06.777 CXX test/cpp_headers/base64.o 00:03:06.777 CXX test/cpp_headers/bdev.o 00:03:06.777 CXX test/cpp_headers/bdev_zone.o 00:03:06.777 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:06.777 CXX test/cpp_headers/bit_array.o 00:03:06.777 CXX test/cpp_headers/bdev_module.o 00:03:06.777 CXX test/cpp_headers/bit_pool.o 00:03:06.777 CXX test/cpp_headers/blob_bdev.o 00:03:06.777 CC app/nvmf_tgt/nvmf_main.o 00:03:06.777 CXX test/cpp_headers/blobfs_bdev.o 00:03:06.777 CXX test/cpp_headers/blob.o 00:03:06.777 CXX test/cpp_headers/blobfs.o 00:03:06.777 CXX test/cpp_headers/conf.o 00:03:06.777 CXX test/cpp_headers/cpuset.o 00:03:06.777 CXX test/cpp_headers/crc16.o 00:03:06.777 CXX test/cpp_headers/config.o 00:03:06.777 CC app/spdk_tgt/spdk_tgt.o 00:03:06.777 CXX test/cpp_headers/crc32.o 00:03:06.777 CXX test/cpp_headers/crc64.o 00:03:06.777 CXX test/cpp_headers/dif.o 00:03:06.777 CXX test/cpp_headers/endian.o 00:03:06.777 CXX test/cpp_headers/dma.o 00:03:06.777 CXX test/cpp_headers/env_dpdk.o 00:03:06.777 CXX test/cpp_headers/fd_group.o 00:03:06.777 CXX test/cpp_headers/fd.o 00:03:06.777 CXX test/cpp_headers/event.o 00:03:06.777 CXX test/cpp_headers/env.o 00:03:06.777 CXX test/cpp_headers/file.o 00:03:06.777 CXX test/cpp_headers/ftl.o 00:03:06.777 CXX test/cpp_headers/hexlify.o 00:03:06.777 CXX test/cpp_headers/histogram_data.o 00:03:06.777 CXX test/cpp_headers/idxd.o 00:03:06.777 CXX test/cpp_headers/gpt_spec.o 00:03:06.777 CXX test/cpp_headers/init.o 00:03:06.777 CXX test/cpp_headers/idxd_spec.o 00:03:06.777 CXX test/cpp_headers/iscsi_spec.o 00:03:06.777 CXX test/cpp_headers/json.o 00:03:06.777 CXX test/cpp_headers/ioat_spec.o 00:03:06.777 CXX test/cpp_headers/ioat.o 00:03:06.777 CXX test/cpp_headers/keyring.o 00:03:06.777 CXX test/cpp_headers/keyring_module.o 00:03:06.777 CXX test/cpp_headers/likely.o 
00:03:06.777 CXX test/cpp_headers/log.o 00:03:06.777 CXX test/cpp_headers/jsonrpc.o 00:03:06.777 CXX test/cpp_headers/memory.o 00:03:06.777 CXX test/cpp_headers/mmio.o 00:03:06.777 CXX test/cpp_headers/nbd.o 00:03:06.777 CXX test/cpp_headers/notify.o 00:03:06.777 CXX test/cpp_headers/lvol.o 00:03:06.777 CXX test/cpp_headers/nvme.o 00:03:06.777 CXX test/cpp_headers/nvme_intel.o 00:03:06.777 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:06.777 CXX test/cpp_headers/nvme_ocssd.o 00:03:06.777 CXX test/cpp_headers/nvme_zns.o 00:03:06.777 CXX test/cpp_headers/nvme_spec.o 00:03:06.777 CXX test/cpp_headers/nvmf_cmd.o 00:03:06.777 CXX test/cpp_headers/nvmf.o 00:03:06.777 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:06.777 CXX test/cpp_headers/opal.o 00:03:06.777 CXX test/cpp_headers/nvmf_spec.o 00:03:06.777 CXX test/cpp_headers/nvmf_transport.o 00:03:06.777 CXX test/cpp_headers/pci_ids.o 00:03:06.777 CXX test/cpp_headers/opal_spec.o 00:03:06.777 CXX test/cpp_headers/pipe.o 00:03:06.777 CXX test/cpp_headers/queue.o 00:03:06.777 CXX test/cpp_headers/reduce.o 00:03:06.777 CC examples/util/zipf/zipf.o 00:03:06.777 CC test/thread/poller_perf/poller_perf.o 00:03:07.045 CC test/app/histogram_perf/histogram_perf.o 00:03:07.045 CXX test/cpp_headers/rpc.o 00:03:07.045 CC test/app/jsoncat/jsoncat.o 00:03:07.045 CC test/app/stub/stub.o 00:03:07.045 CC test/dma/test_dma/test_dma.o 00:03:07.045 CXX test/cpp_headers/scheduler.o 00:03:07.045 CC examples/ioat/verify/verify.o 00:03:07.045 CC test/env/memory/memory_ut.o 00:03:07.045 CC app/fio/nvme/fio_plugin.o 00:03:07.045 CC test/env/pci/pci_ut.o 00:03:07.045 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:07.045 CC examples/ioat/perf/perf.o 00:03:07.045 CC test/env/vtophys/vtophys.o 00:03:07.045 CC app/fio/bdev/fio_plugin.o 00:03:07.045 CC test/app/bdev_svc/bdev_svc.o 00:03:07.306 LINK rpc_client_test 00:03:07.306 LINK spdk_nvme_discover 00:03:07.306 LINK spdk_lspci 00:03:07.306 LINK spdk_trace_record 00:03:07.306 CC test/env/mem_callbacks/mem_callbacks.o 00:03:07.306 LINK interrupt_tgt 00:03:07.306 LINK histogram_perf 00:03:07.306 LINK poller_perf 00:03:07.306 LINK zipf 00:03:07.306 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:07.306 CXX test/cpp_headers/scsi.o 00:03:07.306 LINK jsoncat 00:03:07.306 CXX test/cpp_headers/scsi_spec.o 00:03:07.306 CXX test/cpp_headers/sock.o 00:03:07.306 CXX test/cpp_headers/stdinc.o 00:03:07.306 CXX test/cpp_headers/string.o 00:03:07.306 CXX test/cpp_headers/thread.o 00:03:07.306 CXX test/cpp_headers/trace.o 00:03:07.306 LINK vtophys 00:03:07.306 CXX test/cpp_headers/trace_parser.o 00:03:07.306 CXX test/cpp_headers/tree.o 00:03:07.306 CXX test/cpp_headers/ublk.o 00:03:07.306 CXX test/cpp_headers/util.o 00:03:07.306 CXX test/cpp_headers/uuid.o 00:03:07.566 CXX test/cpp_headers/version.o 00:03:07.566 CXX test/cpp_headers/vfio_user_pci.o 00:03:07.566 CXX test/cpp_headers/vfio_user_spec.o 00:03:07.566 CXX test/cpp_headers/vhost.o 00:03:07.566 CXX test/cpp_headers/vmd.o 00:03:07.566 CXX test/cpp_headers/zipf.o 00:03:07.566 CXX test/cpp_headers/xor.o 00:03:07.566 LINK nvmf_tgt 00:03:07.566 LINK verify 00:03:07.566 LINK iscsi_tgt 00:03:07.566 LINK ioat_perf 00:03:07.566 LINK spdk_tgt 00:03:07.566 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:07.566 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:07.566 LINK stub 00:03:07.566 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:07.566 LINK env_dpdk_post_init 00:03:07.566 LINK bdev_svc 00:03:07.824 LINK spdk_dd 00:03:07.824 LINK pci_ut 00:03:07.824 LINK test_dma 00:03:07.824 LINK 
spdk_trace 00:03:07.824 CC test/event/reactor/reactor.o 00:03:07.824 CC test/event/event_perf/event_perf.o 00:03:07.824 CC test/event/reactor_perf/reactor_perf.o 00:03:07.824 CC test/event/app_repeat/app_repeat.o 00:03:07.824 CC examples/idxd/perf/perf.o 00:03:07.824 CC examples/vmd/led/led.o 00:03:07.824 CC examples/vmd/lsvmd/lsvmd.o 00:03:07.824 LINK nvme_fuzz 00:03:07.824 CC examples/sock/hello_world/hello_sock.o 00:03:07.824 CC test/event/scheduler/scheduler.o 00:03:07.824 CC examples/thread/thread/thread_ex.o 00:03:07.824 LINK spdk_bdev 00:03:07.824 LINK spdk_nvme 00:03:08.081 LINK spdk_top 00:03:08.081 LINK reactor 00:03:08.081 LINK event_perf 00:03:08.081 LINK reactor_perf 00:03:08.081 LINK vhost_fuzz 00:03:08.081 LINK lsvmd 00:03:08.081 LINK led 00:03:08.081 LINK app_repeat 00:03:08.081 LINK spdk_nvme_perf 00:03:08.081 LINK spdk_nvme_identify 00:03:08.081 LINK mem_callbacks 00:03:08.081 CC app/vhost/vhost.o 00:03:08.081 LINK hello_sock 00:03:08.081 LINK scheduler 00:03:08.081 LINK thread 00:03:08.081 LINK idxd_perf 00:03:08.081 CC test/nvme/compliance/nvme_compliance.o 00:03:08.081 CC test/nvme/fused_ordering/fused_ordering.o 00:03:08.081 CC test/nvme/simple_copy/simple_copy.o 00:03:08.081 CC test/nvme/boot_partition/boot_partition.o 00:03:08.081 CC test/nvme/reset/reset.o 00:03:08.081 CC test/nvme/reserve/reserve.o 00:03:08.081 CC test/nvme/fdp/fdp.o 00:03:08.081 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:08.081 CC test/nvme/sgl/sgl.o 00:03:08.081 CC test/nvme/err_injection/err_injection.o 00:03:08.081 CC test/nvme/overhead/overhead.o 00:03:08.081 CC test/nvme/aer/aer.o 00:03:08.081 CC test/nvme/connect_stress/connect_stress.o 00:03:08.339 CC test/nvme/cuse/cuse.o 00:03:08.339 CC test/nvme/e2edp/nvme_dp.o 00:03:08.339 CC test/nvme/startup/startup.o 00:03:08.339 CC test/blobfs/mkfs/mkfs.o 00:03:08.339 CC test/accel/dif/dif.o 00:03:08.339 CC test/lvol/esnap/esnap.o 00:03:08.339 LINK vhost 00:03:08.339 LINK boot_partition 00:03:08.339 LINK startup 00:03:08.339 LINK memory_ut 00:03:08.339 LINK err_injection 00:03:08.339 LINK doorbell_aers 00:03:08.339 LINK connect_stress 00:03:08.339 LINK reserve 00:03:08.339 LINK fused_ordering 00:03:08.339 LINK simple_copy 00:03:08.339 LINK sgl 00:03:08.339 LINK reset 00:03:08.339 LINK mkfs 00:03:08.339 LINK overhead 00:03:08.597 LINK aer 00:03:08.597 LINK nvme_dp 00:03:08.597 LINK nvme_compliance 00:03:08.597 LINK fdp 00:03:08.597 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:08.597 CC examples/nvme/reconnect/reconnect.o 00:03:08.597 CC examples/nvme/hotplug/hotplug.o 00:03:08.597 CC examples/nvme/arbitration/arbitration.o 00:03:08.597 CC examples/nvme/abort/abort.o 00:03:08.597 CC examples/nvme/hello_world/hello_world.o 00:03:08.597 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:08.597 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:08.597 CC examples/accel/perf/accel_perf.o 00:03:08.597 CC examples/blob/hello_world/hello_blob.o 00:03:08.597 CC examples/blob/cli/blobcli.o 00:03:08.597 LINK dif 00:03:08.597 LINK cmb_copy 00:03:08.597 LINK pmr_persistence 00:03:08.856 LINK hotplug 00:03:08.856 LINK hello_world 00:03:08.856 LINK reconnect 00:03:08.856 LINK arbitration 00:03:08.856 LINK abort 00:03:08.856 LINK hello_blob 00:03:08.856 LINK nvme_manage 00:03:08.856 LINK iscsi_fuzz 00:03:08.856 LINK accel_perf 00:03:09.114 LINK blobcli 00:03:09.114 CC test/bdev/bdevio/bdevio.o 00:03:09.371 LINK cuse 00:03:09.371 CC examples/bdev/hello_world/hello_bdev.o 00:03:09.371 CC examples/bdev/bdevperf/bdevperf.o 00:03:09.371 LINK bdevio 00:03:09.629 
LINK hello_bdev 00:03:09.887 LINK bdevperf 00:03:10.453 CC examples/nvmf/nvmf/nvmf.o 00:03:10.712 LINK nvmf 00:03:11.647 LINK esnap 00:03:11.906 00:03:11.906 real 0m43.066s 00:03:11.906 user 6m36.010s 00:03:11.906 sys 3m19.389s 00:03:11.906 23:29:00 make -- common/autotest_common.sh@1118 -- $ xtrace_disable 00:03:11.906 23:29:00 make -- common/autotest_common.sh@10 -- $ set +x 00:03:11.906 ************************************ 00:03:11.906 END TEST make 00:03:11.906 ************************************ 00:03:11.906 23:29:00 -- common/autotest_common.sh@1136 -- $ return 0 00:03:11.906 23:29:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:11.906 23:29:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:11.906 23:29:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:11.906 23:29:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.906 23:29:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:11.906 23:29:00 -- pm/common@44 -- $ pid=1184366 00:03:11.906 23:29:00 -- pm/common@50 -- $ kill -TERM 1184366 00:03:11.906 23:29:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.906 23:29:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:11.906 23:29:00 -- pm/common@44 -- $ pid=1184367 00:03:11.906 23:29:00 -- pm/common@50 -- $ kill -TERM 1184367 00:03:11.906 23:29:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.906 23:29:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:11.906 23:29:00 -- pm/common@44 -- $ pid=1184368 00:03:11.906 23:29:00 -- pm/common@50 -- $ kill -TERM 1184368 00:03:11.906 23:29:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:11.906 23:29:00 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:11.906 23:29:00 -- pm/common@44 -- $ pid=1184392 00:03:11.906 23:29:00 -- pm/common@50 -- $ sudo -E kill -TERM 1184392 00:03:12.165 23:29:00 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:12.165 23:29:00 -- nvmf/common.sh@7 -- # uname -s 00:03:12.165 23:29:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:12.165 23:29:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:12.165 23:29:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:12.165 23:29:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:12.165 23:29:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:12.165 23:29:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:12.165 23:29:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:12.165 23:29:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:12.165 23:29:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:12.165 23:29:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:12.165 23:29:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:03:12.165 23:29:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:03:12.165 23:29:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:12.165 23:29:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:12.165 23:29:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:12.165 23:29:00 -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:12.165 23:29:00 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:12.165 23:29:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:12.165 23:29:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:12.165 23:29:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:12.165 23:29:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.165 23:29:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.165 23:29:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.165 23:29:00 -- paths/export.sh@5 -- # export PATH 00:03:12.165 23:29:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:12.165 23:29:00 -- nvmf/common.sh@47 -- # : 0 00:03:12.165 23:29:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:12.165 23:29:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:12.165 23:29:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:12.165 23:29:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:12.165 23:29:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:12.165 23:29:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:12.165 23:29:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:12.165 23:29:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:12.165 23:29:00 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:12.165 23:29:00 -- spdk/autotest.sh@32 -- # uname -s 00:03:12.165 23:29:00 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:12.165 23:29:00 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:12.165 23:29:00 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:12.165 23:29:00 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:12.165 23:29:00 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:12.165 23:29:00 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:12.165 23:29:00 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:12.165 23:29:00 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:12.165 23:29:00 -- spdk/autotest.sh@48 -- # udevadm_pid=1242992 00:03:12.166 23:29:00 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:12.166 23:29:00 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:12.166 23:29:00 -- pm/common@17 -- 
# local monitor 00:03:12.166 23:29:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.166 23:29:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.166 23:29:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.166 23:29:00 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:12.166 23:29:00 -- pm/common@21 -- # date +%s 00:03:12.166 23:29:00 -- pm/common@25 -- # sleep 1 00:03:12.166 23:29:00 -- pm/common@21 -- # date +%s 00:03:12.166 23:29:00 -- pm/common@21 -- # date +%s 00:03:12.166 23:29:00 -- pm/common@21 -- # date +%s 00:03:12.166 23:29:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078940 00:03:12.166 23:29:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078940 00:03:12.166 23:29:00 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078940 00:03:12.166 23:29:00 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721078940 00:03:12.166 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078940_collect-cpu-load.pm.log 00:03:12.166 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078940_collect-cpu-temp.pm.log 00:03:12.166 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078940_collect-vmstat.pm.log 00:03:12.166 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721078940_collect-bmc-pm.bmc.pm.log 00:03:13.103 23:29:01 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:13.103 23:29:01 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:13.103 23:29:01 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:13.103 23:29:01 -- common/autotest_common.sh@10 -- # set +x 00:03:13.103 23:29:01 -- spdk/autotest.sh@59 -- # create_test_list 00:03:13.103 23:29:01 -- common/autotest_common.sh@740 -- # xtrace_disable 00:03:13.103 23:29:01 -- common/autotest_common.sh@10 -- # set +x 00:03:13.103 23:29:02 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:13.103 23:29:02 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:13.103 23:29:02 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:13.103 23:29:02 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:13.103 23:29:02 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:13.103 23:29:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:13.103 23:29:02 -- common/autotest_common.sh@1449 -- # uname 00:03:13.103 23:29:02 -- common/autotest_common.sh@1449 -- # '[' Linux = FreeBSD ']' 00:03:13.103 23:29:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:13.103 23:29:02 -- common/autotest_common.sh@1469 -- # uname 00:03:13.103 23:29:02 -- 
common/autotest_common.sh@1469 -- # [[ Linux = FreeBSD ]] 00:03:13.103 23:29:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:13.103 23:29:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:13.103 23:29:02 -- spdk/autotest.sh@72 -- # hash lcov 00:03:13.103 23:29:02 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:13.103 23:29:02 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:13.103 --rc lcov_branch_coverage=1 00:03:13.103 --rc lcov_function_coverage=1 00:03:13.103 --rc genhtml_branch_coverage=1 00:03:13.103 --rc genhtml_function_coverage=1 00:03:13.103 --rc genhtml_legend=1 00:03:13.103 --rc geninfo_all_blocks=1 00:03:13.103 ' 00:03:13.103 23:29:02 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:13.103 --rc lcov_branch_coverage=1 00:03:13.103 --rc lcov_function_coverage=1 00:03:13.103 --rc genhtml_branch_coverage=1 00:03:13.103 --rc genhtml_function_coverage=1 00:03:13.103 --rc genhtml_legend=1 00:03:13.103 --rc geninfo_all_blocks=1 00:03:13.103 ' 00:03:13.103 23:29:02 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:13.103 --rc lcov_branch_coverage=1 00:03:13.103 --rc lcov_function_coverage=1 00:03:13.103 --rc genhtml_branch_coverage=1 00:03:13.103 --rc genhtml_function_coverage=1 00:03:13.103 --rc genhtml_legend=1 00:03:13.103 --rc geninfo_all_blocks=1 00:03:13.103 --no-external' 00:03:13.103 23:29:02 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:13.103 --rc lcov_branch_coverage=1 00:03:13.103 --rc lcov_function_coverage=1 00:03:13.103 --rc genhtml_branch_coverage=1 00:03:13.103 --rc genhtml_function_coverage=1 00:03:13.103 --rc genhtml_legend=1 00:03:13.103 --rc geninfo_all_blocks=1 00:03:13.103 --no-external' 00:03:13.103 23:29:02 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:13.362 lcov: LCOV version 1.14 00:03:13.362 23:29:02 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:14.737 
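The capture running above is the coverage baseline: lcov is invoked with the branch/function-coverage rc options plus --no-external, and -c -i records an initial, all-zero data set before any test executes; the "no functions found" warnings are expected for header-only stubs whose .gcno files contain no executable code. A minimal standalone sketch of that step follows (SRC and OUT are placeholder paths, not the job's real directories):

#!/bin/bash
# Baseline coverage capture, mirroring the lcov flags shown in the trace above.
SRC=/path/to/spdk            # placeholder: source tree built with gcov instrumentation
OUT=/path/to/output          # placeholder: results directory
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
LCOV_OPTS+=" --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1"
LCOV_OPTS+=" --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external"
# -c -i captures an initial snapshot with every counter at zero; headers without
# executable functions only produce the harmless geninfo warnings seen in this log.
lcov $LCOV_OPTS -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"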
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:14.737 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:14.737 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno:no 
functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:14.738 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:14.738 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:14.996 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:14.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:14.996 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:14.996 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:14.996 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did 
not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:14.997 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:14.997 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:24.958 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:24.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:37.143 23:29:25 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:37.143 23:29:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:37.143 23:29:25 -- common/autotest_common.sh@10 -- # set +x 00:03:37.143 23:29:25 -- spdk/autotest.sh@91 -- # 
rm -f 00:03:37.143 23:29:25 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.209 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:03:39.209 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:39.209 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:39.209 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:39.466 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:39.724 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:39.724 23:29:28 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:39.724 23:29:28 -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:03:39.724 23:29:28 -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:03:39.724 23:29:28 -- common/autotest_common.sh@1664 -- # local nvme bdf 00:03:39.724 23:29:28 -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:03:39.724 23:29:28 -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:03:39.724 23:29:28 -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:03:39.724 23:29:28 -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.724 23:29:28 -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:03:39.724 23:29:28 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:39.724 23:29:28 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:39.724 23:29:28 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:39.724 23:29:28 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:39.724 23:29:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:39.724 23:29:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:39.724 No valid GPT data, bailing 00:03:39.724 23:29:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:39.724 23:29:28 -- scripts/common.sh@391 -- # pt= 00:03:39.724 23:29:28 -- scripts/common.sh@392 -- # return 1 00:03:39.724 23:29:28 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:39.724 1+0 records in 00:03:39.724 1+0 records out 00:03:39.724 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00141914 s, 739 MB/s 00:03:39.724 23:29:28 -- spdk/autotest.sh@118 -- # sync 00:03:39.724 23:29:28 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:39.724 23:29:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:39.724 23:29:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:44.991 23:29:33 -- spdk/autotest.sh@124 -- # uname -s 00:03:44.991 23:29:33 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:44.991 23:29:33 -- 
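The pre-cleanup pass above screens every NVMe namespace before the tests start: zoned namespaces would be skipped, and a namespace with no GPT signature (the "No valid GPT data, bailing" case) gets its first MiB zeroed with dd. A rough, self-contained sketch of that logic is given below; it is not the autotest helpers themselves, it needs root, and it is destructive if pointed at a disk with data:

#!/bin/bash
set -euo pipefail
shopt -s nullglob
# Screen NVMe namespaces the way the trace above does (illustrative only).
for sys in /sys/block/nvme*; do
    dev=/dev/$(basename "$sys")
    # queue/zoned reports "none" for conventional namespaces
    if [[ -e $sys/queue/zoned && $(cat "$sys/queue/zoned") != none ]]; then
        echo "skipping zoned namespace $dev"
        continue
    fi
    # no partition-table signature -> wipe the first MiB, as the dd in the log does
    if [[ -z $(blkid -s PTTYPE -o value "$dev" || true) ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done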
spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:44.991 23:29:33 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:44.991 23:29:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:44.991 23:29:33 -- common/autotest_common.sh@10 -- # set +x 00:03:44.991 ************************************ 00:03:44.991 START TEST setup.sh 00:03:44.991 ************************************ 00:03:44.991 23:29:33 setup.sh -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:44.991 * Looking for test storage... 00:03:44.991 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:44.991 23:29:33 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:44.991 23:29:33 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:44.991 23:29:33 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:44.991 23:29:33 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:44.991 23:29:33 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:44.991 23:29:33 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.991 ************************************ 00:03:44.991 START TEST acl 00:03:44.991 ************************************ 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:44.991 * Looking for test storage... 00:03:44.991 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:44.991 23:29:33 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1664 -- # local nvme bdf 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.991 23:29:33 setup.sh.acl -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:03:44.991 23:29:33 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:44.991 23:29:33 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:44.991 23:29:33 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:44.991 23:29:33 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:44.991 23:29:33 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:44.991 23:29:33 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:44.991 23:29:33 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:48.268 23:29:36 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:48.268 23:29:36 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:48.268 23:29:36 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.268 23:29:36 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:48.268 23:29:36 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.268 23:29:36 setup.sh.acl -- 
setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:50.799 Hugepages 00:03:50.799 node hugesize free / total 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 00:03:50.799 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.799 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.7 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:50.800 23:29:39 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:50.800 23:29:39 
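The long read loop above is acl.sh classifying the "setup.sh status" table: only rows whose BDF column matches *:*:*.* are considered, ioatdma-bound functions are skipped, and NVMe-bound functions are collected unless they appear in PCI_BLOCKED. A simplified stand-alone version of that loop is sketched here; the here-doc holds made-up sample rows rather than this host's real inventory:

#!/bin/bash
# Simplified device classification, shaped like the loop in the trace above.
PCI_BLOCKED=${PCI_BLOCKED:-}
devs=()
declare -A drivers

# Columns: Type BDF Vendor Device NUMA Driver Block-devices
while read -r _ bdf _ _ _ driver _; do
    [[ $bdf == *:*:*.* ]] || continue          # skip hugepage/header rows
    [[ $driver == nvme ]] || continue          # ioatdma and other drivers are ignored
    [[ $PCI_BLOCKED == *"$bdf"* ]] && continue # honour the block list
    devs+=("$bdf")
    drivers["$bdf"]=$driver
done <<'EOF'
I/OAT 0000:00:04.0 8086 2021 0 ioatdma -
NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0n1
EOF

printf 'selected NVMe function: %s\n' "${devs[@]}"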
setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:50.800 23:29:39 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:50.800 23:29:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.800 ************************************ 00:03:50.800 START TEST denied 00:03:50.800 ************************************ 00:03:50.800 23:29:39 setup.sh.acl.denied -- common/autotest_common.sh@1117 -- # denied 00:03:50.800 23:29:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0' 00:03:50.800 23:29:39 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:50.800 23:29:39 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:03:50.800 23:29:39 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.800 23:29:39 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:54.109 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.109 23:29:42 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:57.392 00:03:57.392 real 0m6.863s 00:03:57.392 user 0m2.183s 00:03:57.392 sys 0m3.995s 00:03:57.392 23:29:46 setup.sh.acl.denied -- common/autotest_common.sh@1118 -- # xtrace_disable 00:03:57.392 23:29:46 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:57.392 ************************************ 00:03:57.392 END TEST denied 00:03:57.392 ************************************ 00:03:57.392 23:29:46 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0 00:03:57.392 23:29:46 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:57.392 23:29:46 setup.sh.acl -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:03:57.392 23:29:46 setup.sh.acl -- common/autotest_common.sh@1099 -- # xtrace_disable 00:03:57.392 23:29:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:57.648 ************************************ 00:03:57.649 START TEST allowed 00:03:57.649 ************************************ 00:03:57.649 23:29:46 setup.sh.acl.allowed -- common/autotest_common.sh@1117 -- # allowed 00:03:57.649 23:29:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:03:57.649 23:29:46 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:57.649 23:29:46 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:03:57.649 23:29:46 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.649 23:29:46 setup.sh.acl.allowed -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:01.826 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:01.826 23:29:50 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:01.826 23:29:50 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:01.826 23:29:50 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:01.826 23:29:50 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:01.826 23:29:50 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:05.104 00:04:05.104 real 0m7.395s 00:04:05.104 user 0m2.191s 00:04:05.104 sys 0m3.742s 00:04:05.104 23:29:53 setup.sh.acl.allowed -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:05.104 23:29:53 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:05.104 ************************************ 00:04:05.104 END TEST allowed 00:04:05.104 ************************************ 00:04:05.104 23:29:53 setup.sh.acl -- common/autotest_common.sh@1136 -- # return 0 00:04:05.104 00:04:05.104 real 0m20.105s 00:04:05.104 user 0m6.521s 00:04:05.104 sys 0m11.611s 00:04:05.104 23:29:53 setup.sh.acl -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:05.104 23:29:53 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:05.104 ************************************ 00:04:05.104 END TEST acl 00:04:05.104 ************************************ 00:04:05.104 23:29:53 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:04:05.104 23:29:53 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:05.104 23:29:53 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:05.104 23:29:53 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:05.104 23:29:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.104 ************************************ 00:04:05.104 START TEST hugepages 00:04:05.104 ************************************ 00:04:05.104 23:29:53 setup.sh.hugepages -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:05.104 * Looking for test storage... 
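The two sub-tests that just finished exercise setup.sh's allow/block lists: "denied" blocks the NVMe controller via PCI_BLOCKED and expects a "Skipping denied controller" line, while "allowed" whitelists only that controller via PCI_ALLOWED and expects it to be rebound (nvme -> vfio-pci). A condensed illustration of both checks follows, using the BDF from this log; the setup.sh path is an assumed checkout-relative location, and the commands rebind drivers, so treat this as a sketch rather than something to run casually:

#!/bin/bash
set -euo pipefail
bdf=0000:5f:00.0                  # the controller under test in this log
setup=./scripts/setup.sh          # path relative to an SPDK checkout (assumption)

# "denied": a blocked controller must be reported as skipped
PCI_BLOCKED=" $bdf" "$setup" config | grep "Skipping denied controller at $bdf"

# "allowed": with only this controller allowed, it must be rebound to a userspace driver
PCI_ALLOWED="$bdf" "$setup" config | grep -E "$bdf .*: nvme -> .*"

# hand the devices back to the kernel drivers afterwards, as the test does
"$setup" reset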
00:04:05.104 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 172858660 kB' 'MemAvailable: 175784528 kB' 'Buffers: 4132 kB' 'Cached: 10213724 kB' 'SwapCached: 0 kB' 'Active: 7310328 kB' 'Inactive: 3521696 kB' 'Active(anon): 6880480 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 617500 kB' 'Mapped: 209824 kB' 'Shmem: 6266312 kB' 'KReclaimable: 237080 kB' 'Slab: 826512 kB' 'SReclaimable: 237080 kB' 'SUnreclaim: 589432 kB' 'KernelStack: 20848 kB' 'PageTables: 9068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 8425184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316252 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.104 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 
23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages 
-- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:53 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:05.105 23:29:54 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGEMEM 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGENODE 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v NRHUGE 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@197 -- # get_nodes 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@26 -- # local node 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@198 -- # clear_hp 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:05.105 23:29:54 setup.sh.hugepages -- setup/hugepages.sh@200 -- # run_test single_node_setup single_node_setup 00:04:05.105 23:29:54 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:05.105 23:29:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:05.105 23:29:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.105 
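The wall of xtrace above is setup/common.sh's get_meminfo walking /proc/meminfo one key at a time with IFS=': ' read -r var val _, skipping every field until it reaches Hugepagesize (2048 kB). With that default recorded, hugepages.sh resets HUGEMEM, HUGENODE and NRHUGE, counts the two NUMA nodes, and zeroes every per-node hugepage pool (clear_hp) before handing control to the test. A minimal sketch of those two helpers, reconstructed from the trace rather than copied from the SPDK tree, looks like this:

    # Reconstructed from the trace above; illustrative, not the verbatim source.
    shopt -s extglob

    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from the
    # per-node meminfo file when NODE is given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo line var val _
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while read -r line; do
            line=${line#Node +([0-9]) }        # per-node files prefix each row with "Node N "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then      # quoted pattern, hence the escaped trace output
                echo "$val"
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # clear_hp mirrors the "echo 0" steps above: zero nr_hugepages for every
    # supported page size on every NUMA node, then flag the pools as cleared.
    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"    # needs root
            done
        done
        export CLEAR_HUGE=yes
    }

The single_node_setup test announced by the banner below then requests 2097152 kB, i.e. 2097152 / 2048 = 1024 pages of the default 2 MiB size, pins them to node 0 with NRHUGE=1024 HUGENODE=0, and hands off to scripts/setup.sh, whose output also shows the ioatdma channels and the NVMe function at 0000:5f:00.0 being rebound to vfio-pci.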
************************************ 00:04:05.105 START TEST single_node_setup 00:04:05.105 ************************************ 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1117 -- # single_node_setup 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@135 -- # get_test_nr_hugepages 2097152 0 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@48 -- # local size=2097152 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@50 -- # shift 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@51 -- # local node_ids 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@61 -- # local user_nodes 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@72 -- # return 0 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # NRHUGE=1024 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # HUGENODE=0 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@136 -- # setup output 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:05.105 23:29:54 setup.sh.hugepages.single_node_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:07.756 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:07.756 0000:80:04.4 
(8086 2021): ioatdma -> vfio-pci 00:04:08.013 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:08.013 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:08.013 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:08.013 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.393 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@137 -- # verify_nr_hugepages 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@88 -- # local node 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@89 -- # local sorted_t 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@90 -- # local sorted_s 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@91 -- # local surp 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@92 -- # local resv 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@93 -- # local anon 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174992244 kB' 'MemAvailable: 177918080 kB' 'Buffers: 4132 kB' 'Cached: 10213832 kB' 'SwapCached: 0 kB' 'Active: 7328816 kB' 'Inactive: 3521696 kB' 'Active(anon): 6898968 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636380 kB' 'Mapped: 209764 kB' 'Shmem: 6266420 kB' 'KReclaimable: 237016 kB' 'Slab: 824952 kB' 'SReclaimable: 237016 kB' 'SUnreclaim: 587936 kB' 'KernelStack: 21248 kB' 'PageTables: 10508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8444760 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316524 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.393 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.394 23:29:58 setup.sh.hugepages.single_node_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@96 -- # 
anon=0 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175003632 kB' 'MemAvailable: 177929468 kB' 'Buffers: 4132 kB' 'Cached: 10213832 kB' 'SwapCached: 0 kB' 'Active: 7328612 kB' 'Inactive: 3521696 kB' 'Active(anon): 6898764 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635636 kB' 'Mapped: 209764 kB' 'Shmem: 6266420 kB' 'KReclaimable: 237016 kB' 'Slab: 824904 kB' 'SReclaimable: 237016 kB' 'SUnreclaim: 587888 kB' 'KernelStack: 21248 kB' 'PageTables: 10336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8444776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316492 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 
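Every comparison in this scan is traced with a backslash-escaped right-hand side (\H\u\g\e\P\a\g\e\s\_\S\u\r\p and so on). That is simply how bash's xtrace renders a quoted pattern inside [[ ]]: quoting "$get" forces a literal string comparison instead of glob matching, and set -x escapes each character to make that explicit. A short illustration:

    set -x
    get=HugePages_Surp
    var=MemTotal
    [[ $var == "$get" ]] || echo "no match, keep scanning"
    # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]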
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 
23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.395 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.396 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:09.397 23:29:58 
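At this point verify_nr_hugepages is collecting its accounting fields from the snapshots printed above: AnonHugePages came back as 0 (anon=0), HugePages_Surp has just matched, and the HugePages_Rsvd lookup continues below. A rough sketch of that bookkeeping, reusing the get_meminfo sketch from earlier; the closing comparison is an assumed stand-in, since the exact assertion lives in setup/hugepages.sh:

    # Illustrative only: the final check is assumed, not taken from the script.
    verify_nr_hugepages_sketch() {
        local want=${NRHUGE:-1024}
        local anon surp resv total free
        anon=$(get_meminfo AnonHugePages)     # 0 in the trace: THP is not inflating the pool
        surp=$(get_meminfo HugePages_Surp)    # 0
        resv=$(get_meminfo HugePages_Rsvd)    # lookup continues below in the log
        total=$(get_meminfo HugePages_Total)  # 1024 in the snapshot
        free=$(get_meminfo HugePages_Free)    # 1024
        # Assumed check: the pool holds everything requested and nothing is tied up.
        (( total - surp == want && free >= total - resv ))
    }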
setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@98 -- # surp=0 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175002280 kB' 'MemAvailable: 177928116 kB' 'Buffers: 4132 kB' 'Cached: 10213852 kB' 'SwapCached: 0 kB' 'Active: 7328736 kB' 'Inactive: 3521696 kB' 'Active(anon): 6898888 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635644 kB' 'Mapped: 209764 kB' 'Shmem: 6266440 kB' 'KReclaimable: 237016 kB' 'Slab: 824924 kB' 'SReclaimable: 237016 kB' 'SUnreclaim: 587908 kB' 'KernelStack: 21248 kB' 'PageTables: 10592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8443308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316428 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.397 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.398 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 
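What the trace above is doing: setup/common.sh's get_meminfo walks /proc/meminfo (or a node's meminfo file under /sys/devices/system/node) one "Key: value" pair at a time, skipping every field until it reaches the one that was requested, then echoes that value. A minimal sketch of the same lookup is below; the function name, argument handling, and fallback are illustrative, not a copy of the SPDK helper.

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup: return one field from /proc/meminfo,
# or from a per-NUMA-node meminfo file when a node index is given.
get_meminfo_sketch() {
  local key=$1 node=${2:-}            # e.g. HugePages_Rsvd, optional node index
  local mem_f=/proc/meminfo line var val
  # Per-node meminfo files prefix every line with "Node N "; prefer them when a node is requested.
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  while IFS= read -r line; do
    line=${line#Node "$node" }        # strip the per-node prefix (no-op for /proc/meminfo)
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$key" ]]; then
      echo "${val:-0}"
      return 0
    fi
  done < "$mem_f"
  echo 0                              # key not present: fall back to 0
}
# get_meminfo_sketch HugePages_Rsvd      -> 0 in the trace above
# get_meminfo_sketch HugePages_Total 0   -> 1024, the whole pool on node 0

The real helper instead slurps the file with mapfile and strips the "Node N " prefix from the whole array up front, which is why every field comparison shows up as its own traced [[ ... ]] line here.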
00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 
23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 
23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.659 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@99 -- # resv=0 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:09.660 
nr_hugepages=1024 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:09.660 resv_hugepages=0 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:09.660 surplus_hugepages=0 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:09.660 anon_hugepages=0 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node= 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175001128 kB' 'MemAvailable: 177926964 kB' 'Buffers: 4132 kB' 'Cached: 10213872 kB' 'SwapCached: 0 kB' 'Active: 7328788 kB' 'Inactive: 3521696 kB' 'Active(anon): 6898940 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635632 kB' 'Mapped: 209764 kB' 'Shmem: 6266460 kB' 'KReclaimable: 237016 kB' 'Slab: 824892 kB' 'SReclaimable: 237016 kB' 'SUnreclaim: 587876 kB' 'KernelStack: 21168 kB' 'PageTables: 10176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8444824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316444 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var 
val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.660 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- 
setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 1024 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@111 -- # get_nodes 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@26 -- # local node 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@18 -- # local node=0 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@19 -- # local var val 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.661 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90608448 kB' 'MemUsed: 7007180 kB' 'SwapCached: 0 kB' 'Active: 3261852 kB' 'Inactive: 199128 kB' 'Active(anon): 3015368 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 199128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2956248 kB' 'Mapped: 164332 kB' 'AnonPages: 507920 kB' 'Shmem: 2510636 kB' 'KernelStack: 14440 kB' 'PageTables: 7580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105956 kB' 'Slab: 391064 kB' 'SReclaimable: 105956 kB' 'SUnreclaim: 285108 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
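From here the trace repeats the same field-by-field scan, this time against /sys/devices/system/node/node0/meminfo, to pull the node-local HugePages_Surp. Taken together, the hugepages.sh steps visible above amount to: read the global HugePages_Total/Rsvd/Surp counters, confirm they add up to the 1024 pages requested, then glob the NUMA node directories and tally how many pages sit on each node. A rough, self-contained sketch of that accounting follows; meminfo_val, requested, and per_node are illustrative names, not the script's own.

#!/usr/bin/env bash
shopt -s extglob                      # node directories are globbed with +([0-9]) in the traced script
# Rough sketch of the per-node hugepage accounting visible in the trace.
meminfo_val() {                       # field $1 from /proc/meminfo, or from node $2's meminfo file
  local f=/proc/meminfo
  [[ -n ${2:-} ]] && f=/sys/devices/system/node/node$2/meminfo
  awk -v k="$1:" '{ sub(/^Node [0-9]+ /, "") } $1 == k { print $2; exit }' "$f"
}
requested=1024                        # pages asked for in this single-node test run
nr=$(meminfo_val HugePages_Total)
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
(( requested == nr + surp + resv )) || echo "pool does not match the request" >&2
declare -a per_node=()
for node in /sys/devices/system/node/node+([0-9]); do
  id=${node##*node}                   # node0 -> 0, node1 -> 1
  per_node[id]=$(( $(meminfo_val HugePages_Total "$id") - $(meminfo_val HugePages_Surp "$id") ))
done
echo "hugepages per node: ${per_node[*]}"   # e.g. "1024 0" when the whole pool sits on node 0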
00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.662 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.663 23:29:58 
setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # continue 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # echo 0 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/common.sh@33 -- # return 0 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:09.663 node0=1024 expecting 1024 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:09.663 00:04:09.663 real 0m4.378s 00:04:09.663 user 0m1.199s 00:04:09.663 sys 0m1.793s 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:09.663 23:29:58 setup.sh.hugepages.single_node_setup -- common/autotest_common.sh@10 -- # set +x 00:04:09.663 ************************************ 00:04:09.663 END TEST single_node_setup 00:04:09.663 ************************************ 00:04:09.663 23:29:58 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:04:09.663 23:29:58 setup.sh.hugepages -- setup/hugepages.sh@201 -- # run_test even_2G_alloc even_2G_alloc 00:04:09.663 23:29:58 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:09.663 23:29:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:09.663 23:29:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.663 ************************************ 00:04:09.663 START TEST even_2G_alloc 00:04:09.663 ************************************ 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1117 -- # even_2G_alloc 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@142 -- # get_test_nr_hugepages 2097152 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:09.663 23:29:58 
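# --- illustrative sketch, not part of the captured log ---
# The even_2G_alloc trace starting above requests 2097152 kB of hugepages; with the
# default 2048 kB hugepage size that is 1024 pages, which get_test_nr_hugepages_per_node
# then splits evenly across the two NUMA nodes (512 each), as the nodes_test[...] lines
# that follow show. A minimal sketch of that arithmetic, using hypothetical variable
# names rather than the actual setup/hugepages.sh internals:
#
#   #!/usr/bin/env bash
#   size_kb=2097152                     # requested allocation (2G) in kB
#   default_hugepages_kb=2048           # Hugepagesize from /proc/meminfo
#   no_nodes=2                          # NUMA nodes on this test system
#
#   nr_hugepages=$(( size_kb / default_hugepages_kb ))   # 1024
#   per_node=$(( nr_hugepages / no_nodes ))              # 512
#
#   nodes_test=()
#   for (( node = 0; node < no_nodes; node++ )); do
#       nodes_test[node]=$per_node
#   done
#   echo "total=${nr_hugepages} node0=${nodes_test[0]} node1=${nodes_test[1]}"
#
# The trace then exports NRHUGE=1024 and runs scripts/setup.sh, which performs the
# actual reservation and produces the vfio-pci driver lines below.
# --- end sketch ---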
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 512 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # NRHUGE=1024 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@143 -- # setup output 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:09.663 23:29:58 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:12.186 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:12.186 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:12.186 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:12.446 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:12.446 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:12.446 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:12.446 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:12.446 0000:80:04.0 
(8086 2021): Already using the vfio-pci driver 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@144 -- # verify_nr_hugepages 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@88 -- # local node 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175006076 kB' 'MemAvailable: 177931908 kB' 'Buffers: 4132 kB' 'Cached: 10213972 kB' 'SwapCached: 0 kB' 'Active: 7330484 kB' 'Inactive: 3521696 kB' 'Active(anon): 6900636 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635916 kB' 'Mapped: 210364 kB' 'Shmem: 6266560 kB' 'KReclaimable: 237008 kB' 'Slab: 824496 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 587488 kB' 'KernelStack: 21008 kB' 'PageTables: 9708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8452536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316412 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
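# --- illustrative sketch, not part of the captured log ---
# The repeated IFS=': ' / read -r var val _ / continue lines above and below are
# setup/common.sh's get_meminfo scanning the /proc/meminfo snapshot it just captured
# for a single key (AnonHugePages here, HugePages_Surp and HugePages_Rsvd later).
# A simplified, self-contained stand-in for that pattern (the helper name and the
# missing per-node handling are my simplifications):
#
#   #!/usr/bin/env bash
#   get_meminfo_sketch() {
#       local get=$1 var val _
#       while IFS=': ' read -r var val _; do
#           [[ $var == "$get" ]] || continue   # skip every other meminfo key
#           echo "$val"                        # value only; unit lands in "_"
#           return 0
#       done < /proc/meminfo
#       echo 0
#   }
#   get_meminfo_sketch AnonHugePages
#   get_meminfo_sketch HugePages_Surp
#
# The real helper can also read /sys/devices/system/node/node<N>/meminfo; the
# mapfile/mem=("${mem[@]#Node +([0-9]) }") lines in the trace strip the leading
# "Node <N>" prefix before the same key-matching loop runs.
# --- end sketch ---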
[[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.446 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.447 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
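# --- illustrative sketch, not part of the captured log ---
# verify_nr_hugepages collects anon (AnonHugePages), surp (HugePages_Surp) and
# resv (HugePages_Rsvd) from these snapshots and finally compares the per-node
# counts against the expectation (as the earlier 'node0=1024 expecting 1024' line
# did for the previous test). A hedged sketch of that kind of check, not the
# script's exact logic:
#
#   #!/usr/bin/env bash
#   expected_total=1024
#   read_key() { awk -v k="$1" -F'[: ]+' '$1 == k {print $2}' /proc/meminfo; }
#
#   total=$(read_key HugePages_Total)
#   free=$(read_key HugePages_Free)
#   rsvd=$(read_key HugePages_Rsvd)
#   surp=$(read_key HugePages_Surp)
#
#   echo "HugePages_Total=$total Free=$free Rsvd=$rsvd Surp=$surp"
#   [[ $total -eq $expected_total ]] || { echo "unexpected hugepage count"; exit 1; }
# --- end sketch ---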
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175012396 kB' 'MemAvailable: 177938228 kB' 'Buffers: 4132 kB' 'Cached: 10213976 kB' 'SwapCached: 0 kB' 'Active: 7330644 kB' 'Inactive: 3521696 kB' 'Active(anon): 6900796 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637708 kB' 'Mapped: 210300 kB' 'Shmem: 6266564 kB' 'KReclaimable: 237008 kB' 'Slab: 824516 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 587508 kB' 'KernelStack: 21024 kB' 'PageTables: 9944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8450308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316364 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.448 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.449 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175009180 kB' 'MemAvailable: 177935012 kB' 'Buffers: 4132 kB' 'Cached: 10213996 kB' 'SwapCached: 0 kB' 'Active: 7333980 kB' 'Inactive: 3521696 kB' 'Active(anon): 6904132 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 
'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641000 kB' 'Mapped: 210568 kB' 'Shmem: 6266584 kB' 'KReclaimable: 237008 kB' 'Slab: 824380 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 587372 kB' 'KernelStack: 21008 kB' 'PageTables: 9904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8453508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316368 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 
23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.450 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.451 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.452 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:12.452 nr_hugepages=1024 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:12.452 resv_hugepages=0 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:12.452 surplus_hugepages=0 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:12.452 anon_hugepages=0 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 175014288 kB' 'MemAvailable: 177940120 kB' 'Buffers: 4132 kB' 'Cached: 10214016 kB' 'SwapCached: 0 kB' 'Active: 7330044 kB' 'Inactive: 3521696 kB' 'Active(anon): 6900196 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636940 kB' 'Mapped: 210300 kB' 'Shmem: 6266604 kB' 'KReclaimable: 237008 kB' 'Slab: 824380 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 587372 kB' 'KernelStack: 20992 kB' 'PageTables: 9832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 
'Committed_AS: 8449808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316348 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.452 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.713 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@26 -- # local node 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.714 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91654048 kB' 'MemUsed: 5961580 kB' 'SwapCached: 0 kB' 'Active: 3266464 kB' 'Inactive: 199128 kB' 'Active(anon): 3019980 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 199128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2956264 kB' 'Mapped: 165292 kB' 'AnonPages: 512640 kB' 'Shmem: 2510652 kB' 'KernelStack: 14152 kB' 'PageTables: 6972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105956 kB' 'Slab: 390808 kB' 'SReclaimable: 105956 kB' 'SUnreclaim: 284852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:12.714 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.715 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 83352992 kB' 'MemUsed: 10412544 kB' 'SwapCached: 0 kB' 'Active: 4068428 kB' 'Inactive: 3322568 kB' 'Active(anon): 3885064 kB' 'Inactive(anon): 0 kB' 'Active(file): 183364 kB' 'Inactive(file): 3322568 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7261924 kB' 'Mapped: 45420 kB' 'AnonPages: 129272 kB' 'Shmem: 3755992 kB' 'KernelStack: 6888 kB' 'PageTables: 2988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131052 kB' 'Slab: 433572 kB' 'SReclaimable: 131052 kB' 'SUnreclaim: 302520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.716 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue [xtrace elided: the same setup/common.sh@31-32 loop scans the remaining node1 meminfo fields (MemUsed through HugePages_Free, in the order printed above) against HugePages_Surp; none match, so each iteration continues]
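The node0=512 expecting 512 / node1=512 expecting 512 lines that follow are the point of the test: the 2 GB of 2 MB pages allocated by even_2G_alloc (1024 pages) should have landed evenly on the two NUMA nodes, 512 each. A hedged sketch of an equivalent check that reads the kernel's per-node hugepage counters from sysfs instead of going through the meminfo parsing above (array name and messages are illustrative, not from hugepages.sh):

#!/usr/bin/env bash
# Hedged sketch, not the hugepages.sh code path: verify the per-node split reported below.
expected=( [0]=512 [1]=512 )   # even_2G_alloc: 1024 x 2 MB pages split evenly over 2 nodes
for node in "${!expected[@]}"; do
    sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    nr=$(<"$sysfs")
    echo "node$node=$nr expecting ${expected[node]}"
    (( nr == expected[node] )) || { echo "node$node has $nr pages, wanted ${expected[node]}" >&2; exit 1; }
done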
00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:12.717 node0=512 expecting 512 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:12.717 node1=512 expecting 512 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@129 -- # [[ 512 == \5\1\2 ]] 00:04:12.717 00:04:12.717 real 0m3.013s 00:04:12.717 user 0m1.247s 00:04:12.717 sys 0m1.827s 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:12.717 23:30:01 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:12.717 ************************************ 00:04:12.717 END TEST even_2G_alloc 00:04:12.717 ************************************ 00:04:12.717 23:30:01 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:04:12.717 23:30:01 setup.sh.hugepages -- setup/hugepages.sh@202 -- # run_test odd_alloc odd_alloc 00:04:12.717 23:30:01 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:12.717 23:30:01 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:12.717 23:30:01 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:12.717 ************************************ 00:04:12.717 START TEST odd_alloc 00:04:12.717 ************************************ 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1117 -- # odd_alloc 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@149 -- # get_test_nr_hugepages 2098176 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@48 -- # local size=2098176 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1025 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@61 -- # local user_nodes 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1025 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=512 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 513 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=513 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # HUGEMEM=2049 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@150 -- # setup output 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.717 23:30:01 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:15.244 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:15.245 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:15.245 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@151 -- # verify_nr_hugepages 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@88 -- # local node 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local 
sorted_s 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174975096 kB' 'MemAvailable: 177900928 kB' 'Buffers: 4132 kB' 'Cached: 10214132 kB' 'SwapCached: 0 kB' 'Active: 7332424 kB' 'Inactive: 3521696 kB' 'Active(anon): 6902576 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638556 kB' 'Mapped: 209732 kB' 'Shmem: 6266720 kB' 'KReclaimable: 237008 kB' 'Slab: 823784 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 586776 kB' 'KernelStack: 20912 kB' 'PageTables: 9524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8442104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316368 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.506 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue [xtrace elided: setup/common.sh@31-32 scans the remaining /proc/meminfo fields (MemAvailable through VmallocChunk, in the order printed above) against AnonHugePages; none match, so each iteration continues]
# continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174975756 kB' 'MemAvailable: 177901588 kB' 'Buffers: 4132 kB' 'Cached: 10214136 kB' 'SwapCached: 0 kB' 'Active: 7332184 kB' 'Inactive: 3521696 kB' 'Active(anon): 6902336 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638828 kB' 'Mapped: 209652 kB' 'Shmem: 6266724 kB' 'KReclaimable: 237008 kB' 'Slab: 823792 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 586784 kB' 'KernelStack: 20896 kB' 'PageTables: 9456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8442120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316320 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.508 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.508 
23:30:04 setup.sh.hugepages.odd_alloc -- [xtrace elided: the scan continues over the remaining /proc/meminfo fields (Active(anon) through FileHugePages, in the order printed above) against HugePages_Surp; none match, so each iteration continues]
23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # 
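The lookup traced above is the generic meminfo query in setup/common.sh (get_meminfo): read /proc/meminfo (or a per-node meminfo file) into an array, strip any leading "Node N " prefix, then walk the fields with IFS=': ' read -r until the requested key is found and print its value. The sketch below reconstructs that pattern from the trace entries; it is a sketch only, and the name get_meminfo_sketch, the argument handling and the return 1 fallback are my own additions rather than the verbatim SPDK helper.

    #!/usr/bin/env bash
    shopt -s extglob   # the "Node +([0-9]) " strip below uses an extended glob

    # get_meminfo_sketch KEY [NODE]
    # Print the value of KEY from /proc/meminfo, or from
    # /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
    get_meminfo_sketch() {
        local get=$1 node=$2
        local var val _ mem_f mem
        mem_f=/proc/meminfo
        # Per-node meminfo lines look like "Node 0 MemTotal: ...", hence the prefix strip.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every field until the requested key
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo_sketch HugePages_Surp      # prints 0 here, matching the surp=0 above
    get_meminfo_sketch HugePages_Total 0   # per-node query, as used later in this test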
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.509 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.510 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174976292 kB' 'MemAvailable: 177902124 kB' 'Buffers: 4132 kB' 'Cached: 10214152 kB' 'SwapCached: 0 kB' 'Active: 7332196 kB' 'Inactive: 3521696 kB' 'Active(anon): 6902348 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638824 kB' 'Mapped: 209652 kB' 'Shmem: 6266740 kB' 'KReclaimable: 237008 kB' 'Slab: 823788 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 586780 kB' 'KernelStack: 20896 kB' 'PageTables: 9456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8442140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316320 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB'
[... setup/common.sh@31-@32 repeat the IFS=': ' / read -r var val _ / continue cycle for every /proc/meminfo field ahead of the requested key ...]
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1025
00:04:15.511 nr_hugepages=1025 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
00:04:15.511 resv_hugepages=0 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
00:04:15.511 surplus_hugepages=0 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
00:04:15.511 anon_hugepages=0 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@106 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@108 -- # (( 1025 == nr_hugepages ))
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.511 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.512 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174976916 kB' 'MemAvailable: 177902748 kB' 'Buffers: 4132 kB' 'Cached: 10214176 kB' 'SwapCached: 0 kB' 'Active: 7332220 kB' 'Inactive: 3521696 kB' 'Active(anon): 6902372 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638824 kB' 'Mapped: 209652 kB' 'Shmem: 6266764 kB' 'KReclaimable: 237008 kB' 'Slab: 823788 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 586780 kB' 'KernelStack: 20896 kB' 'PageTables: 9456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 8442160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316320 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB'
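At this point the snapshot just above reports HugePages_Total: 1025 for the 1025 pages the odd_alloc case requested, with HugePages_Rsvd and HugePages_Surp both 0, so hugepages.sh can assert 1025 == nr_hugepages + surp + resv and then expect the odd total to land on the two NUMA nodes as 513 + 512 (the get_nodes output just below). A small sketch of that accounting; the helper name expected_per_node and the "first nodes take the remainder" split rule are assumptions on my part, and only the 1025, 513 and 512 figures come from this run.

    #!/usr/bin/env bash
    # Spread an odd hugepage count over the NUMA nodes: every node gets total/nodes
    # pages and the first (total % nodes) nodes get one extra page each.
    expected_per_node() {
        local total=$1 nodes=$2 i
        for ((i = 0; i < nodes; i++)); do
            echo "node$i: $((total / nodes + (i < total % nodes ? 1 : 0)))"
        done
    }

    expected_per_node 1025 2   # -> node0: 513, node1: 512

    # Consistency check mirroring hugepages.sh: the kernel-reported total must equal
    # the requested page count plus surplus plus reserved pages.
    nr_hugepages=1025 surp=0 resv=0 total=1025
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting OK"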
[... setup/common.sh@31-@32 again cycle through every /proc/meminfo field until HugePages_Total is reached ...]
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@26 -- # local node
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=513
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@31
-- # no_nodes=2 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91628860 kB' 'MemUsed: 5986768 kB' 'SwapCached: 0 kB' 'Active: 3266208 kB' 'Inactive: 199128 kB' 'Active(anon): 3019724 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 199128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2956368 kB' 'Mapped: 164376 kB' 'AnonPages: 512160 kB' 'Shmem: 2510756 kB' 'KernelStack: 14104 kB' 'PageTables: 6940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105956 kB' 'Slab: 390348 kB' 'SReclaimable: 105956 kB' 'SUnreclaim: 284392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.513 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.514 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.773 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:15.774 
23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 83345812 kB' 'MemUsed: 10419724 kB' 'SwapCached: 0 kB' 'Active: 4067468 kB' 'Inactive: 3322568 kB' 'Active(anon): 3884104 kB' 'Inactive(anon): 0 kB' 'Active(file): 183364 kB' 'Inactive(file): 3322568 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7261960 kB' 'Mapped: 45276 kB' 'AnonPages: 128128 kB' 'Shmem: 3756028 kB' 'KernelStack: 6824 kB' 'PageTables: 2668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131052 kB' 'Slab: 433440 kB' 'SReclaimable: 131052 kB' 'SUnreclaim: 302388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.774 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node0=513 expecting 513' 00:04:15.775 node0=513 expecting 513 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # echo 'node1=512 expecting 512' 00:04:15.775 node1=512 expecting 512 00:04:15.775 23:30:04 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@129 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:15.776 00:04:15.776 real 0m2.934s 00:04:15.776 user 0m1.160s 00:04:15.776 sys 0m1.838s 00:04:15.776 
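(Editor's note, not part of the captured log.) The trace above is setup/common.sh's get_meminfo resolving HugePages_Surp for node 0 and node 1, after which odd_alloc prints "node0=513 expecting 513" / "node1=512 expecting 512" and compares the two lists. A minimal sketch of that lookup, reconstructed only from the traced commands; the function name, loop shape, and error handling here are assumptions, not the repo's exact code:

    # Sketch of the per-node meminfo lookup seen in the xtrace above.
    # Reconstructed from the trace; details may differ from setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 node=$2          # e.g. HugePages_Surp 0
        local var val _ line
        local mem_f=/proc/meminfo mem

        # Prefer the per-node view when a node id is given and sysfs exposes it.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        # Per-node files prefix every line with "Node <id> "; strip that so both
        # sources parse the same way (extglob needed for the +([0-9]) pattern).
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")

        # Scan "Key: value" pairs until the requested field is found.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

    # Per the trace, on this box this would print 0 surplus pages for node 0:
    # get_meminfo_sketch HugePages_Surp 0

odd_alloc then adds any surplus to each node's expected count and echoes the per-node totals before the final "[[ 512 513 == 512 513 ]]" check.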
23:30:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:15.776 23:30:04 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.776 ************************************ 00:04:15.776 END TEST odd_alloc 00:04:15.776 ************************************ 00:04:15.776 23:30:04 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:04:15.776 23:30:04 setup.sh.hugepages -- setup/hugepages.sh@203 -- # run_test custom_alloc custom_alloc 00:04:15.776 23:30:04 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:15.776 23:30:04 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:15.776 23:30:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.776 ************************************ 00:04:15.776 START TEST custom_alloc 00:04:15.776 ************************************ 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1117 -- # custom_alloc 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@157 -- # local IFS=, 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@159 -- # local node 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # nodes_hp=() 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@160 -- # local nodes_hp 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@162 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@164 -- # get_test_nr_hugepages 1048576 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=1048576 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=512 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=512 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 0 > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 256 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 1 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@81 -- # nodes_test[_no_nodes - 1]=256 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # : 0 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@80 -- # (( _no_nodes > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@165 -- # nodes_hp[0]=512 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@166 -- # (( 2 > 1 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # get_test_nr_hugepages 2097152 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # (( 1 > 1 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 1 > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@168 -- # nodes_hp[1]=1024 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@171 -- # for node in "${!nodes_hp[@]}" 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@173 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # get_test_nr_hugepages_per_node 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # user_nodes=() 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@63 -- # 
local _nr_hugepages=1024 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@68 -- # (( 0 > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@73 -- # (( 2 > 0 )) 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=512 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # nodes_test[_no_nodes]=1024 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@77 -- # return 0 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # setup output 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.776 23:30:04 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:18.301 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.301 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.301 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nr_hugepages=1536 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # verify_nr_hugepages 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@88 -- # local node 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- 
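(Editor's note, not part of the captured log.) By this point custom_alloc has chosen 512 2 MiB pages for node 0 and 1024 for node 1, joined them with the function-local IFS=',' into HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (nr_hugepages=1536 in total), and re-run scripts/setup.sh; the vfio-pci lines are that run's device scan. A minimal sketch of the HUGENODE assembly, reconstructed from the trace; the wrapper function and echo are illustrative additions, the array and variable names mirror the xtrace:

    # Sketch: build the HUGENODE spec the way the traced hugepages.sh does.
    hugenode_demo() {
        local IFS=,                            # hugepages.sh@157: join entries with commas
        local node _nr_hugepages=0
        local -a nodes_hp=([0]=512 [1]=1024)   # per-node page counts from the trace above
        local -a HUGENODE=()

        for node in "${!nodes_hp[@]}"; do
            HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
            (( _nr_hugepages += nodes_hp[node] ))
        done

        # "${HUGENODE[*]}" joins with IFS, so this prints
        # nodes_hp[0]=512,nodes_hp[1]=1024 (total 1536 pages).
        echo "HUGENODE=${HUGENODE[*]} (total ${_nr_hugepages} pages)"
        # The real test then exports HUGENODE and invokes scripts/setup.sh again.
    }

    hugenode_demo

verify_nr_hugepages, whose trace follows, re-reads AnonHugePages and the per-node HugePages_* counters to confirm the kernel actually honored that split.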
setup/hugepages.sh@93 -- # local anon 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173959544 kB' 'MemAvailable: 176885376 kB' 'Buffers: 4132 kB' 'Cached: 10214288 kB' 'SwapCached: 0 kB' 'Active: 7333008 kB' 'Inactive: 3521696 kB' 'Active(anon): 6903160 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639052 kB' 'Mapped: 209724 kB' 'Shmem: 6266876 kB' 'KReclaimable: 237008 kB' 'Slab: 822944 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 585936 kB' 'KernelStack: 20912 kB' 'PageTables: 9520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8442932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316352 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.565 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.566 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173960604 kB' 'MemAvailable: 176886436 kB' 'Buffers: 4132 kB' 'Cached: 10214292 kB' 'SwapCached: 0 kB' 'Active: 7332784 kB' 'Inactive: 3521696 kB' 'Active(anon): 6902936 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639292 kB' 'Mapped: 209628 kB' 'Shmem: 6266880 kB' 
'KReclaimable: 237008 kB' 'Slab: 822932 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 585924 kB' 'KernelStack: 20896 kB' 'PageTables: 9460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8442948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316336 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.567 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173960604 kB' 'MemAvailable: 176886436 kB' 'Buffers: 4132 kB' 'Cached: 10214308 kB' 'SwapCached: 0 kB' 'Active: 7332596 kB' 'Inactive: 3521696 kB' 'Active(anon): 6902748 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639068 kB' 'Mapped: 209628 kB' 'Shmem: 6266896 kB' 'KReclaimable: 237008 kB' 'Slab: 822932 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 585924 kB' 'KernelStack: 20880 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8442968 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316336 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
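[editor note] The xtrace output around this point comes from get_meminfo in setup/common.sh: it reads /proc/meminfo (or a per-NUMA-node meminfo file when a node is given), strips any leading "Node <N> " prefix, then scans line by line with IFS=': ' until the requested key matches and echoes its value (0 here for AnonHugePages and HugePages_Surp). A minimal, self-contained sketch of that lookup pattern follows; the helper name meminfo_value and the standalone shopt handling are assumptions for illustration, not the project's exact code.

    #!/usr/bin/env bash
    # Sketch of the lookup pattern traced in this log (hypothetical helper
    # name; the logged helper is get_meminfo in setup/common.sh).
    shopt -s extglob                      # for the +([0-9]) pattern below

    meminfo_value() {
        local get=$1 node=${2:-}          # key to find, optional NUMA node
        local mem_f=/proc/meminfo
        # Per-node meminfo files prefix every line with "Node <N> ".
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # strip the "Node <N> " prefix, if any
        local var val _
        while IFS=': ' read -r var val _; do
            # Compare each key against the requested one, as in the [[ ... ]]
            # checks traced here; the kB unit, when present, lands in _.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # Example use, mirroring the lookups in this log:
    meminfo_value AnonHugePages       # -> 0
    meminfo_value HugePages_Total     # -> 1536 on this runner
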
00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.568 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.569 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1536 00:04:18.570 nr_hugepages=1536 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:18.570 resv_hugepages=0 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:18.570 surplus_hugepages=0 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:18.570 anon_hugepages=0 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@106 -- # (( 1536 == nr_hugepages + surp 
+ resv )) 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@108 -- # (( 1536 == nr_hugepages )) 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 173960604 kB' 'MemAvailable: 176886436 kB' 'Buffers: 4132 kB' 'Cached: 10214308 kB' 'SwapCached: 0 kB' 'Active: 7333100 kB' 'Inactive: 3521696 kB' 'Active(anon): 6903252 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 639572 kB' 'Mapped: 209628 kB' 'Shmem: 6266896 kB' 'KReclaimable: 237008 kB' 'Slab: 822932 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 585924 kB' 'KernelStack: 20880 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 8442992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316336 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.570 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.570 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 
23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
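[editor note] As a quick consistency check on the meminfo snapshots printed above: Hugepagesize is 2048 kB and HugePages_Total is 1536, and 1536 x 2048 kB = 3145728 kB, which matches the reported Hugetlb total (3 GiB). With anon=0, surp=0 and resv=0 from the lookups, the hugepages.sh assertions traced here, (( 1536 == nr_hugepages + surp + resv )) and (( 1536 == nr_hugepages )), are the script's way of confirming that the observed pool size equals the requested custom allocation.
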
00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.571 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@26 -- # local node 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=512 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91634520 kB' 'MemUsed: 5981108 kB' 'SwapCached: 0 kB' 'Active: 3264848 kB' 'Inactive: 199128 kB' 'Active(anon): 3018364 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 199128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2956508 kB' 'Mapped: 164352 kB' 'AnonPages: 510668 kB' 'Shmem: 2510896 kB' 'KernelStack: 14104 kB' 'PageTables: 6888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105956 kB' 'Slab: 390008 kB' 'SReclaimable: 105956 kB' 'SUnreclaim: 284052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.572 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
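Zooming out from the per-key scan: the custom_alloc verification above first confirms that the system-wide HugePages_Total (1536) equals nr_hugepages plus surplus plus reserved pages, then walks both NUMA nodes, folds each node's reserved and surplus pages into its expected figure, and finally compares the resulting layout with the requested 512/1024 split (the node0=512 / node1=1024 lines further down). A condensed sketch of that accounting, with this run's numbers hard-coded purely for illustration:

#!/usr/bin/env bash
# Condensed sketch of the accounting this run performs; the numbers
# (1536 total, a 512/1024 node split, zero surplus and reserved pages)
# are copied from the trace and hard-coded purely for illustration.
set -euo pipefail

nr_hugepages=1536          # total requested by the custom_alloc test
surp=0 resv=0              # system-wide HugePages_Surp / HugePages_Rsvd
total=1536                 # HugePages_Total echoed by the lookup above

# Global identity checked first.
(( total == nr_hugepages + surp + resv )) || { echo "global count mismatch"; exit 1; }

# Per-node figures: what the node meminfo files report vs. what was requested.
reported=([0]=512 [1]=1024)
expected=([0]=512 [1]=1024)

for node in "${!expected[@]}"; do
    node_surp=0            # per-node HugePages_Surp, 0 for both nodes in this trace
    (( expected[node] += resv + node_surp ))
    echo "node${node}=${reported[node]} expecting ${expected[node]}"
done

# Final layout comparison, mirroring the [[ 512,1024 == ... ]] check in the log.
[[ "${reported[0]},${reported[1]}" == "512,1024" ]] && echo "custom_alloc layout OK"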
00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 1 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.573 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765536 kB' 'MemFree: 82325920 kB' 'MemUsed: 11439616 kB' 'SwapCached: 0 kB' 'Active: 4067596 kB' 'Inactive: 3322568 kB' 'Active(anon): 3884232 kB' 'Inactive(anon): 0 kB' 'Active(file): 183364 kB' 'Inactive(file): 3322568 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7261996 kB' 'Mapped: 45276 kB' 'AnonPages: 128216 kB' 'Shmem: 3756064 kB' 'KernelStack: 6776 kB' 'PageTables: 2520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 131052 kB' 'Slab: 432924 kB' 'SReclaimable: 131052 kB' 'SUnreclaim: 301872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.573 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.574 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.575 
23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.575 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node0=512 expecting 512' 00:04:18.833 node0=512 expecting 512 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # echo 'node1=1024 expecting 1024' 00:04:18.833 node1=1024 expecting 1024 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@129 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:18.833 00:04:18.833 real 0m2.961s 00:04:18.833 user 0m1.215s 00:04:18.833 sys 0m1.812s 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:18.833 23:30:07 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@10 -- # set +x 00:04:18.833 ************************************ 00:04:18.833 END TEST custom_alloc 00:04:18.833 ************************************ 00:04:18.833 23:30:07 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:04:18.833 23:30:07 setup.sh.hugepages -- setup/hugepages.sh@204 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:18.833 23:30:07 setup.sh.hugepages -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:18.833 23:30:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:18.833 23:30:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:18.833 ************************************ 00:04:18.833 START TEST no_shrink_alloc 00:04:18.833 ************************************ 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1117 -- # no_shrink_alloc 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@185 -- # get_test_nr_hugepages 2097152 0 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@48 -- # local size=2097152 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # (( 2 > 1 )) 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # shift 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # node_ids=('0') 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # local node_ids 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@54 -- # (( size >= default_hugepages )) 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@56 -- # nr_hugepages=1024 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # get_test_nr_hugepages_per_node 0 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # user_nodes=('0') 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@61 -- # local user_nodes 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@63 -- # local _nr_hugepages=1024 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _no_nodes=2 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # nodes_test=() 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@66 -- # local -g nodes_test 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@68 -- # (( 1 > 0 )) 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # for _no_nodes in "${user_nodes[@]}" 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # nodes_test[_no_nodes]=1024 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@72 -- # return 0 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # NRHUGE=1024 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # HUGENODE=0 00:04:18.833 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@188 -- # setup output 00:04:18.834 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.834 23:30:07 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:21.362 0000:00:04.7 (8086 2021): Already using the 
vfio-pci driver 00:04:21.362 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:21.362 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:21.362 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@189 -- # verify_nr_hugepages 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@88 -- # local node 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local sorted_t 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_s 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local surp 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local resv 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local anon 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@95 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174966140 kB' 'MemAvailable: 177891972 kB' 'Buffers: 4132 kB' 'Cached: 10214428 kB' 'SwapCached: 0 kB' 'Active: 7328056 kB' 'Inactive: 
3521696 kB' 'Active(anon): 6898208 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633980 kB' 'Mapped: 208816 kB' 'Shmem: 6267016 kB' 'KReclaimable: 237008 kB' 'Slab: 823064 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 586056 kB' 'KernelStack: 20864 kB' 'PageTables: 9348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8435836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316252 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.362 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
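The long run of "continue" entries above and below is the setup/common.sh get_meminfo helper walking /proc/meminfo key by key until it reaches the field it was asked for (AnonHugePages at this point in the run). A minimal bash sketch of that lookup, reconstructed from the traced @17-@33 lines; the exact function body here is an assumption and the upstream helper may differ in detail:

    shopt -s extglob   # assumption: enables the +([0-9]) pattern used below

    # Look a field up in /proc/meminfo, or in a per-NUMA-node meminfo file
    # when a node number is given. Prints the value of the first matching key.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f=/proc/meminfo mem
        # with a node argument, read the per-node meminfo instead
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # per-node files prefix each line with "Node <N> "; strip that prefix
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys until the requested one
            echo "$val"                        # e.g. AnonHugePages -> 0 in this run
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

Called with no node argument, as in this run, mem_f stays /proc/meminfo (hence the probe of /sys/devices/system/node/node/meminfo with an empty node visible in the trace), and the scan ends by echoing 0 for AnonHugePages.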
00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:21.363 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174966240 kB' 'MemAvailable: 177892072 kB' 'Buffers: 4132 kB' 'Cached: 10214428 kB' 'SwapCached: 0 kB' 'Active: 7328084 kB' 'Inactive: 3521696 kB' 'Active(anon): 6898236 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 634032 kB' 'Mapped: 208796 kB' 'Shmem: 6267016 kB' 'KReclaimable: 237008 kB' 'Slab: 823040 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 586032 kB' 'KernelStack: 20848 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8435856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316220 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 
23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.364 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 
23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.626 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:21.627 23:30:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.627 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174967868 kB' 'MemAvailable: 177893700 kB' 'Buffers: 4132 kB' 'Cached: 10214448 kB' 'SwapCached: 0 kB' 'Active: 7327288 kB' 'Inactive: 3521696 kB' 'Active(anon): 6897440 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633664 kB' 'Mapped: 208716 kB' 'Shmem: 6267036 kB' 'KReclaimable: 237008 kB' 'Slab: 823020 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 586012 kB' 'KernelStack: 20832 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8435876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316204 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 
23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.628 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
[trace condensed: get_meminfo scans the remaining /proc/meminfo fields (VmallocUsed through HugePages_Free), skipping each one until it reaches HugePages_Rsvd]
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages ))
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total
[trace condensed: get_meminfo sets get=HugePages_Total, node='', mem_f=/proc/meminfo and reads the file into an array]
00:04:21.629 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174967500 kB' 'MemAvailable: 177893332 kB' 'Buffers: 4132 kB' 'Cached: 10214472 kB' 'SwapCached: 0 kB' 'Active: 7327280 kB' 'Inactive: 3521696 kB' 'Active(anon): 6897432 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633664 kB' 'Mapped: 208716 kB' 'Shmem: 6267060 kB' 'KReclaimable: 237008 kB' 'Slab: 823020 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 586012 kB' 'KernelStack: 20832 kB' 'PageTables: 9228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8435900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316204 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB'
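Note: the arithmetic checks a few entries above ((( 1024 == nr_hugepages + surp + resv )) and (( 1024 == nr_hugepages ))) compare the expected hugepage count against the counters visible in the snapshot just printed (HugePages_Total: 1024, HugePages_Rsvd: 0, HugePages_Surp: 0). A minimal stand-alone sketch of the same comparison, not the SPDK helper itself; taking nr_hugepages from /proc/sys/vm/nr_hugepages is an assumption of this sketch, the test script carries its expected value internally:

  #!/usr/bin/env bash
  # Read the hugepage counters and evaluate the same expression the trace checks:
  # HugePages_Total == nr_hugepages + surplus + reserved.
  field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

  total=$(field HugePages_Total)
  surp=$(field HugePages_Surp)
  resv=$(field HugePages_Rsvd)
  nr=$(cat /proc/sys/vm/nr_hugepages)   # assumption: expected count taken from sysctl

  if (( total == nr + surp + resv )); then
    echo "consistent: total=$total nr=$nr surp=$surp resv=$resv"
  else
    echo "mismatch: total=$total nr=$nr surp=$surp resv=$resv" >&2
  fi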
[trace condensed: each field of the snapshot above is read and skipped until HugePages_Total is reached]
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 ))
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}"
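Note: the long runs of '[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue' entries condensed above all come from one small parser: get_meminfo reads /proc/meminfo (or a per-node meminfo file), strips the "Node N" prefix, and walks the fields until it finds the one requested, then echoes its value (1024 here). A rough sketch of that behaviour, inferred from the trace rather than copied from setup/common.sh:

  #!/usr/bin/env bash
  # Print the value of one field from /proc/meminfo, or from a per-node
  # meminfo file when a NUMA node number is given (sketch only).
  get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
      line=${line#"Node $node "}          # per-node files prefix every line with "Node <N> "
      IFS=': ' read -r var val _ <<< "$line"
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done < "$mem_f"
    return 1
  }

  get_meminfo_sketch HugePages_Total      # printed 1024 in the run above
  get_meminfo_sketch HugePages_Surp 0     # node 0 value, 0 in the run above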
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv ))
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0
[trace condensed: get_meminfo sets get=HugePages_Surp, node=0, switches mem_f to /sys/devices/system/node/node0/meminfo and reads that file into an array]
00:04:21.631 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90563232 kB' 'MemUsed: 7052396 kB' 'SwapCached: 0 kB' 'Active: 3259264 kB' 'Inactive: 199128 kB' 'Active(anon): 3012780 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 199128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2956620 kB' 'Mapped: 163448 kB' 'AnonPages: 505000 kB' 'Shmem: 2511008 kB' 'KernelStack: 14056 kB' 'PageTables: 6700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105956 kB' 'Slab: 390112 kB' 'SReclaimable: 105956 kB' 'SUnreclaim: 284156 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[trace condensed: each field of the node0 snapshot above is read and skipped until HugePages_Surp is reached]
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
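Note: the per-node branch above reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; its lines carry a "Node 0" prefix, and HugePages_Total/Free/Surp are reported per node. A small hypothetical helper (not part of the test) that prints the same "nodeN=<pages> expecting <N>" style summary the script echoes just below:

  #!/usr/bin/env bash
  # Tally HugePages_Total per NUMA node from the per-node meminfo files.
  expected=${1:-1024}    # the expectation is a parameter of this sketch only
  for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    # per-node meminfo lines look like: "Node 0 HugePages_Total:  1024"
    total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node${node}=${total} expecting ${expected}"
  done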
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 ))
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}"
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]]
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # CLEAR_HUGE=no
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # NRHUGE=512
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # HUGENODE=0
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@192 -- # setup output
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.633 23:30:10 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:24.160 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:24.161 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:24.161 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:24.161 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:24.433 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@194 -- # verify_nr_hugepages
[trace condensed: verify_nr_hugepages declares its locals (node, sorted_t, sorted_s, surp, resv, anon) and checks the transparent-hugepage setting: [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]]
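Note: the allocation step traced above is driven entirely through environment variables; CLEAR_HUGE=no, NRHUGE=512 and HUGENODE=0 are visible in the trace, and the "Requested 512 hugepages but 1024 already allocated on node0" message is setup.sh declining to shrink an allocation that already exceeds the request. A sketch of the equivalent manual invocation; setup.sh normally needs root privileges, and option semantics beyond what this log shows are an assumption to be checked against the SPDK documentation:

  # run from the SPDK checkout used in this job (path taken from the log)
  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  sudo CLEAR_HUGE=no NRHUGE=512 HUGENODE=0 ./scripts/setup.sh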
00:04:24.434 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # get_meminfo AnonHugePages
[trace condensed: get_meminfo sets get=AnonHugePages, node='', mem_f=/proc/meminfo and reads the file into an array]
00:04:24.434 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174979188 kB' 'MemAvailable: 177905020 kB' 'Buffers: 4132 kB' 'Cached: 10214568 kB' 'SwapCached: 0 kB' 'Active: 7327804 kB' 'Inactive: 3521696 kB' 'Active(anon): 6897956 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633568 kB' 'Mapped: 208816 kB' 'Shmem: 6267156 kB' 'KReclaimable: 237008 kB' 'Slab: 822628 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 585620 kB' 'KernelStack: 20848 kB' 'PageTables: 9280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8434688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316236 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB'
[trace condensed: the snapshot above is scanned field by field (MemTotal through VmallocTotal so far) looking for AnonHugePages; the scan continues]
00:04:24.434 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.434 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.434 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.434 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # anon=0 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # get_meminfo HugePages_Surp 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174979836 kB' 'MemAvailable: 177905668 kB' 'Buffers: 4132 kB' 
'Cached: 10214568 kB' 'SwapCached: 0 kB' 'Active: 7327000 kB' 'Inactive: 3521696 kB' 'Active(anon): 6897152 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633268 kB' 'Mapped: 208728 kB' 'Shmem: 6267156 kB' 'KReclaimable: 237008 kB' 'Slab: 822572 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 585564 kB' 'KernelStack: 20816 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8434704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316236 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
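The xtrace above is setup/common.sh's get_meminfo walking a snapshot of /proc/meminfo field by field: each "Key: value" line is split with IFS=': ', compared against the requested key (AnonHugePages, then HugePages_Surp), skipped with continue on a mismatch, and the matching value is echoed back to hugepages.sh. A minimal sketch of that lookup follows, assuming the /proc/meminfo and per-node sysfs layout shown in the dump; get_meminfo_sketch is an illustrative name, not the verbatim setup/common.sh helper.

    #!/usr/bin/env bash
    # Minimal sketch of the lookup traced above: read /proc/meminfo (or a
    # per-node meminfo file), split each line on ': ', and print the value
    # of the requested key. Illustrative only; the real setup/common.sh
    # helper differs in its argument handling.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node statistics live in sysfs; their lines carry a "Node N "
        # prefix that is stripped before parsing (assumed layout).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # not the key we want, keep scanning
            echo "${val:-0}"
            return 0
        done
        return 1
    }
    # Example: get_meminfo_sketch HugePages_Surp prints 0 on the box above.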
00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.435 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@98 -- # surp=0 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Rsvd 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174980824 kB' 'MemAvailable: 177906656 kB' 'Buffers: 4132 kB' 'Cached: 10214588 kB' 'SwapCached: 0 kB' 'Active: 7327024 kB' 'Inactive: 3521696 kB' 'Active(anon): 6897176 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 
'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633264 kB' 'Mapped: 208728 kB' 'Shmem: 6267176 kB' 'KReclaimable: 237008 kB' 'Slab: 822572 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 585564 kB' 'KernelStack: 20816 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8434728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316236 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.436 
23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.436 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.437 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # resv=0 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@101 -- # echo nr_hugepages=1024 00:04:24.438 nr_hugepages=1024 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo resv_hugepages=0 00:04:24.438 resv_hugepages=0 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo surplus_hugepages=0 00:04:24.438 surplus_hugepages=0 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo anon_hugepages=0 00:04:24.438 anon_hugepages=0 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@106 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@108 -- # (( 1024 == nr_hugepages )) 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # get_meminfo HugePages_Total 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381164 kB' 'MemFree: 174982300 kB' 'MemAvailable: 177908132 kB' 'Buffers: 4132 kB' 'Cached: 10214608 kB' 'SwapCached: 0 kB' 'Active: 7327044 kB' 'Inactive: 3521696 kB' 'Active(anon): 6897196 kB' 'Inactive(anon): 0 kB' 'Active(file): 429848 kB' 'Inactive(file): 3521696 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 633268 kB' 'Mapped: 208728 kB' 'Shmem: 6267196 kB' 'KReclaimable: 237008 kB' 'Slab: 822572 kB' 'SReclaimable: 237008 kB' 'SUnreclaim: 585564 kB' 'KernelStack: 20816 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 8434752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316236 kB' 'VmallocChunk: 0 kB' 'Percpu: 72960 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3048404 kB' 'DirectMap2M: 38574080 kB' 'DirectMap1G: 160432128 kB' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.438 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@111 -- # get_nodes 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@26 -- # local node 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=1024 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@28 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # nodes_sys[${node##*node}]=0 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@31 -- # no_nodes=2 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # (( no_nodes > 0 )) 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@114 -- # for node in "${!nodes_test[@]}" 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # (( nodes_test[node] += resv )) 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # get_meminfo HugePages_Surp 0 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 90568036 kB' 'MemUsed: 7047592 kB' 'SwapCached: 0 kB' 'Active: 3262324 kB' 'Inactive: 199128 kB' 'Active(anon): 3015840 kB' 'Inactive(anon): 0 kB' 'Active(file): 246484 kB' 'Inactive(file): 199128 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2956748 kB' 'Mapped: 163964 kB' 'AnonPages: 507924 kB' 'Shmem: 2511136 kB' 'KernelStack: 14024 kB' 'PageTables: 6640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 105956 kB' 'Slab: 389740 kB' 'SReclaimable: 
105956 kB' 'SUnreclaim: 283784 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.439 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 
23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 
23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += 0 )) 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@125 -- # for node in "${!nodes_test[@]}" 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_t[nodes_test[node]]=1 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # sorted_s[nodes_sys[node]]=1 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # echo 'node0=1024 expecting 1024' 00:04:24.440 node0=1024 expecting 1024 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@129 -- # [[ 1024 == \1\0\2\4 ]] 00:04:24.440 00:04:24.440 real 0m5.743s 00:04:24.440 user 0m2.394s 00:04:24.440 sys 0m3.449s 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:24.440 23:30:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:24.440 ************************************ 00:04:24.440 END TEST no_shrink_alloc 00:04:24.440 ************************************ 00:04:24.440 23:30:13 setup.sh.hugepages -- common/autotest_common.sh@1136 -- # return 0 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@206 -- # clear_hp 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@36 -- # local node hp 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@38 -- # for node in "${!nodes_sys[@]}" 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@38 -- 
# for node in "${!nodes_sys[@]}" 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@40 -- # echo 0 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@44 -- # export CLEAR_HUGE=yes 00:04:24.440 23:30:13 setup.sh.hugepages -- setup/hugepages.sh@44 -- # CLEAR_HUGE=yes 00:04:24.440 00:04:24.440 real 0m19.514s 00:04:24.440 user 0m7.449s 00:04:24.440 sys 0m11.005s 00:04:24.440 23:30:13 setup.sh.hugepages -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:24.440 23:30:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:24.440 ************************************ 00:04:24.440 END TEST hugepages 00:04:24.440 ************************************ 00:04:24.698 23:30:13 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:04:24.698 23:30:13 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:24.698 23:30:13 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:24.698 23:30:13 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:24.698 23:30:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:24.698 ************************************ 00:04:24.698 START TEST driver 00:04:24.698 ************************************ 00:04:24.698 23:30:13 setup.sh.driver -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:24.698 * Looking for test storage... 
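The hugepages suite above finishes with clear_hp, which walks every NUMA node's hugepage directories and, per the trace, echoes 0 for each page size before exporting CLEAR_HUGE=yes; the driver tests then pick up below. A small sketch of that cleanup, assuming (xtrace hides redirections) that the 0 is written to each pool's nr_hugepages file:

    # Sketch of the clear_hp cleanup traced above; writing sysfs needs root.
    # The nr_hugepages target is an assumption, since xtrace does not show
    # the redirection itself.
    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # drop this pool back to zero pages
            done
        done
        export CLEAR_HUGE=yes                 # matches the export in the trace above
    }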
00:04:24.698 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:24.698 23:30:13 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:24.698 23:30:13 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.698 23:30:13 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.881 23:30:17 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:28.881 23:30:17 setup.sh.driver -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:28.881 23:30:17 setup.sh.driver -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:28.881 23:30:17 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:28.881 ************************************ 00:04:28.881 START TEST guess_driver 00:04:28.881 ************************************ 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- common/autotest_common.sh@1117 -- # guess_driver 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:28.881 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:28.881 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:28.881 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:28.881 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:28.881 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:28.881 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:28.881 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:28.881 Looking for driver=vfio-pci 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.881 23:30:17 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:31.404 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.404 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.404 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.404 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.404 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.405 23:30:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.778 23:30:21 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.778 23:30:21 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.778 23:30:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.035 23:30:21 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.035 23:30:21 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:33.035 23:30:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.035 23:30:21 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.209 00:04:37.209 real 0m8.224s 00:04:37.209 user 0m2.248s 00:04:37.209 sys 0m3.849s 00:04:37.209 23:30:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:37.209 23:30:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.209 ************************************ 00:04:37.209 END TEST guess_driver 00:04:37.209 ************************************ 00:04:37.209 23:30:25 setup.sh.driver -- common/autotest_common.sh@1136 -- # return 0 00:04:37.209 00:04:37.209 real 0m12.196s 00:04:37.209 user 0m3.352s 00:04:37.209 sys 0m5.944s 00:04:37.209 23:30:25 
setup.sh.driver -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:37.209 23:30:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.209 ************************************ 00:04:37.209 END TEST driver 00:04:37.209 ************************************ 00:04:37.209 23:30:25 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:04:37.209 23:30:25 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:37.209 23:30:25 setup.sh -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:37.209 23:30:25 setup.sh -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:37.209 23:30:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.209 ************************************ 00:04:37.209 START TEST devices 00:04:37.209 ************************************ 00:04:37.209 23:30:25 setup.sh.devices -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:37.209 * Looking for test storage... 00:04:37.209 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:37.209 23:30:25 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:37.209 23:30:25 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:37.209 23:30:25 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.209 23:30:25 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1663 -- # zoned_devs=() 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1663 -- # local -gA zoned_devs 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1664 -- # local nvme bdf 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1666 -- # for nvme in /sys/block/nvme* 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1667 -- # is_block_zoned nvme0n1 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:39.731 23:30:28 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:39.731 23:30:28 
setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:39.731 No valid GPT data, bailing 00:04:39.731 23:30:28 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.731 23:30:28 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:39.731 23:30:28 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:39.731 23:30:28 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:39.731 23:30:28 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:39.731 23:30:28 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:39.731 23:30:28 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:39.731 23:30:28 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:39.731 ************************************ 00:04:39.731 START TEST nvme_mount 00:04:39.731 ************************************ 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1117 -- # nvme_mount 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:39.731 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:39.732 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:39.732 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:39.732 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:39.732 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no 
)) 00:04:39.732 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:39.732 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:39.732 23:30:28 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:40.667 Creating new GPT entries in memory. 00:04:40.667 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:40.667 other utilities. 00:04:40.667 23:30:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:40.667 23:30:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.667 23:30:29 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:40.667 23:30:29 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:40.667 23:30:29 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:42.041 Creating new GPT entries in memory. 00:04:42.041 The operation has completed successfully. 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1273635 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:42.041 
23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.041 23:30:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 
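The nvme_mount trace above reduces to a short sequence of standard tools: zap the GPT, create one 1 GiB partition, format it ext4, mount it and drop a test file, then unmount and wipe signatures on the way out. The sketch below replays that sequence by hand; it is a minimal illustration of what setup/common.sh drives, not the test itself, and /tmp/nvme_mount is a hypothetical stand-in for the workspace mount point. The partition range 2048:2099199 (1 GiB of 512-byte sectors) is taken from the sgdisk call in the log.

    # WARNING: destroys all data on $disk -- use a scratch device only
    disk=/dev/nvme0n1                    # device under test, per the trace
    mnt=/tmp/nvme_mount                  # hypothetical stand-in for the SPDK test path

    sgdisk "$disk" --zap-all             # wipe existing GPT/MBR structures
    sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition, sectors 2048..2099199
    mkfs.ext4 -qF "${disk}p1"            # quiet, forced ext4 format
    mkdir -p "$mnt"
    mount "${disk}p1" "$mnt"
    touch "$mnt/test_nvme"               # the test file the verify step looks for

    # cleanup mirrors cleanup_nvme: unmount, then wipe partition and disk signatures
    umount "$mnt"
    wipefs --all "${disk}p1"
    wipefs --all "$disk"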
00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:44.573 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:44.573 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:44.832 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:44.832 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:44.832 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:44.832 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 
mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.832 23:30:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:47.362 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.362 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:47.362 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:47.362 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.362 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.362 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.362 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.362 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:47.363 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' '' 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.620 23:30:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:50.145 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:50.403 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:50.403 00:04:50.403 real 0m10.633s 00:04:50.403 user 0m3.132s 00:04:50.403 sys 0m5.308s 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:50.403 23:30:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 
************************************ 00:04:50.403 END TEST nvme_mount 00:04:50.403 ************************************ 00:04:50.403 23:30:39 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 0 00:04:50.403 23:30:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:50.403 23:30:39 setup.sh.devices -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:04:50.403 23:30:39 setup.sh.devices -- common/autotest_common.sh@1099 -- # xtrace_disable 00:04:50.403 23:30:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:50.403 ************************************ 00:04:50.403 START TEST dm_mount 00:04:50.403 ************************************ 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1117 -- # dm_mount 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:50.403 23:30:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:51.779 Creating new GPT entries in memory. 00:04:51.779 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:51.779 other utilities. 00:04:51.779 23:30:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:51.779 23:30:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.779 23:30:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:51.779 23:30:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:51.779 23:30:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:52.716 Creating new GPT entries in memory. 00:04:52.716 The operation has completed successfully. 00:04:52.716 23:30:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:52.716 23:30:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:52.716 23:30:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:52.716 23:30:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:52.716 23:30:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:53.652 The operation has completed successfully. 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1277807 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.652 23:30:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 
setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:56.183 23:30:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0 00:04:56.183 23:30:45 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.183 23:30:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:59.538 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.538 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:59.538 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- 
# [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:59.539 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:59.539 00:04:59.539 real 0m8.661s 00:04:59.539 user 0m2.062s 00:04:59.539 sys 0m3.649s 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:59.539 23:30:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:59.539 ************************************ 00:04:59.539 END TEST dm_mount 00:04:59.539 ************************************ 00:04:59.539 23:30:48 setup.sh.devices -- common/autotest_common.sh@1136 -- # return 0 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:59.539 23:30:48 setup.sh.devices -- 
setup/devices.sh@11 -- # cleanup_nvme 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:59.539 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:59.539 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:59.539 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:59.539 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:59.539 23:30:48 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:59.539 00:04:59.539 real 0m22.568s 00:04:59.539 user 0m6.260s 00:04:59.539 sys 0m10.990s 00:04:59.539 23:30:48 setup.sh.devices -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:59.539 23:30:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:59.539 ************************************ 00:04:59.539 END TEST devices 00:04:59.539 ************************************ 00:04:59.539 23:30:48 setup.sh -- common/autotest_common.sh@1136 -- # return 0 00:04:59.539 00:04:59.539 real 1m14.754s 00:04:59.539 user 0m23.735s 00:04:59.539 sys 0m39.796s 00:04:59.539 23:30:48 setup.sh -- common/autotest_common.sh@1118 -- # xtrace_disable 00:04:59.539 23:30:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.539 ************************************ 00:04:59.539 END TEST setup.sh 00:04:59.539 ************************************ 00:04:59.539 23:30:48 -- common/autotest_common.sh@1136 -- # return 0 00:04:59.539 23:30:48 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:02.139 Hugepages 00:05:02.139 node hugesize free / total 00:05:02.139 node0 1048576kB 0 / 0 00:05:02.139 node0 2048kB 1024 / 1024 00:05:02.139 node1 1048576kB 0 / 0 00:05:02.139 node1 2048kB 1024 / 1024 00:05:02.139 00:05:02.139 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.139 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:02.139 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:02.139 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:02.139 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:02.139 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:02.139 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:02.139 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:02.139 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:02.139 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:02.139 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:02.139 I/OAT 0000:80:04.1 
8086 2021 1 ioatdma - - 00:05:02.139 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:02.139 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:02.139 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:02.139 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:02.139 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:02.140 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:02.140 23:30:51 -- spdk/autotest.sh@130 -- # uname -s 00:05:02.140 23:30:51 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:02.140 23:30:51 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:02.140 23:30:51 -- common/autotest_common.sh@1525 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:04.670 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:04.670 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:06.044 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:05:06.044 23:30:54 -- common/autotest_common.sh@1526 -- # sleep 1 00:05:07.417 23:30:55 -- common/autotest_common.sh@1527 -- # bdfs=() 00:05:07.417 23:30:55 -- common/autotest_common.sh@1527 -- # local bdfs 00:05:07.417 23:30:55 -- common/autotest_common.sh@1528 -- # bdfs=($(get_nvme_bdfs)) 00:05:07.417 23:30:55 -- common/autotest_common.sh@1528 -- # get_nvme_bdfs 00:05:07.417 23:30:55 -- common/autotest_common.sh@1507 -- # bdfs=() 00:05:07.417 23:30:55 -- common/autotest_common.sh@1507 -- # local bdfs 00:05:07.417 23:30:55 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:07.417 23:30:55 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:07.417 23:30:55 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:05:07.417 23:30:56 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:05:07.417 23:30:56 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:5f:00.0 00:05:07.417 23:30:56 -- common/autotest_common.sh@1530 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.307 Waiting for block devices as requested 00:05:09.307 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:05:09.564 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:09.564 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:09.564 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:09.821 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:09.821 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:09.821 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:09.821 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:10.076 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:10.076 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:10.076 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 
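The per-node hugepage counts in the status table above come straight from sysfs. A minimal way to reproduce them by hand, assuming a standard Linux sysfs layout and no SPDK scripts at all:

    # print "node<N> <size> <free> / <total>" for every node and hugepage size
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}            # e.g. 2048kB or 1048576kB
            total=$(cat "$hp/nr_hugepages")
            free=$(cat "$hp/free_hugepages")
            echo "${node##*/} $size $free / $total"
        done
    done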
00:05:10.331 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:10.331 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:10.331 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:10.331 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:10.586 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:10.586 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:10.586 23:30:59 -- common/autotest_common.sh@1532 -- # for bdf in "${bdfs[@]}" 00:05:10.586 23:30:59 -- common/autotest_common.sh@1533 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:05:10.586 23:30:59 -- common/autotest_common.sh@1496 -- # readlink -f /sys/class/nvme/nvme0 00:05:10.586 23:30:59 -- common/autotest_common.sh@1496 -- # grep 0000:5f:00.0/nvme/nvme 00:05:10.586 23:30:59 -- common/autotest_common.sh@1496 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:05:10.586 23:30:59 -- common/autotest_common.sh@1497 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:05:10.586 23:30:59 -- common/autotest_common.sh@1501 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:05:10.586 23:30:59 -- common/autotest_common.sh@1501 -- # printf '%s\n' nvme0 00:05:10.586 23:30:59 -- common/autotest_common.sh@1533 -- # nvme_ctrlr=/dev/nvme0 00:05:10.586 23:30:59 -- common/autotest_common.sh@1534 -- # [[ -z /dev/nvme0 ]] 00:05:10.586 23:30:59 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:10.586 23:30:59 -- common/autotest_common.sh@1539 -- # grep oacs 00:05:10.586 23:30:59 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:10.586 23:30:59 -- common/autotest_common.sh@1539 -- # oacs=' 0xe' 00:05:10.586 23:30:59 -- common/autotest_common.sh@1540 -- # oacs_ns_manage=8 00:05:10.586 23:30:59 -- common/autotest_common.sh@1542 -- # [[ 8 -ne 0 ]] 00:05:10.586 23:30:59 -- common/autotest_common.sh@1548 -- # nvme id-ctrl /dev/nvme0 00:05:10.586 23:30:59 -- common/autotest_common.sh@1548 -- # grep unvmcap 00:05:10.586 23:30:59 -- common/autotest_common.sh@1548 -- # cut -d: -f2 00:05:10.841 23:30:59 -- common/autotest_common.sh@1548 -- # unvmcap=' 0' 00:05:10.841 23:30:59 -- common/autotest_common.sh@1549 -- # [[ 0 -eq 0 ]] 00:05:10.841 23:30:59 -- common/autotest_common.sh@1551 -- # continue 00:05:10.841 23:30:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:10.841 23:30:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.841 23:30:59 -- common/autotest_common.sh@10 -- # set +x 00:05:10.841 23:30:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:10.841 23:30:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:10.841 23:30:59 -- common/autotest_common.sh@10 -- # set +x 00:05:10.841 23:30:59 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:13.366 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:13.366 0000:80:04.3 
(8086 2021): ioatdma -> vfio-pci 00:05:13.624 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:13.624 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:13.624 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:14.995 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:05:14.995 23:31:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:14.995 23:31:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.995 23:31:03 -- common/autotest_common.sh@10 -- # set +x 00:05:14.995 23:31:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:14.995 23:31:03 -- common/autotest_common.sh@1585 -- # mapfile -t bdfs 00:05:14.995 23:31:03 -- common/autotest_common.sh@1585 -- # get_nvme_bdfs_by_id 0x0a54 00:05:14.995 23:31:03 -- common/autotest_common.sh@1571 -- # bdfs=() 00:05:14.995 23:31:03 -- common/autotest_common.sh@1571 -- # local bdfs 00:05:14.995 23:31:03 -- common/autotest_common.sh@1573 -- # get_nvme_bdfs 00:05:14.995 23:31:03 -- common/autotest_common.sh@1507 -- # bdfs=() 00:05:14.995 23:31:03 -- common/autotest_common.sh@1507 -- # local bdfs 00:05:14.995 23:31:03 -- common/autotest_common.sh@1508 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:14.995 23:31:03 -- common/autotest_common.sh@1508 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:14.995 23:31:03 -- common/autotest_common.sh@1508 -- # jq -r '.config[].params.traddr' 00:05:15.253 23:31:04 -- common/autotest_common.sh@1509 -- # (( 1 == 0 )) 00:05:15.253 23:31:04 -- common/autotest_common.sh@1513 -- # printf '%s\n' 0000:5f:00.0 00:05:15.253 23:31:04 -- common/autotest_common.sh@1573 -- # for bdf in $(get_nvme_bdfs) 00:05:15.253 23:31:04 -- common/autotest_common.sh@1574 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:05:15.253 23:31:04 -- common/autotest_common.sh@1574 -- # device=0x0a54 00:05:15.253 23:31:04 -- common/autotest_common.sh@1575 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:15.253 23:31:04 -- common/autotest_common.sh@1576 -- # bdfs+=($bdf) 00:05:15.253 23:31:04 -- common/autotest_common.sh@1580 -- # printf '%s\n' 0000:5f:00.0 00:05:15.253 23:31:04 -- common/autotest_common.sh@1586 -- # [[ -z 0000:5f:00.0 ]] 00:05:15.253 23:31:04 -- common/autotest_common.sh@1591 -- # spdk_tgt_pid=1286618 00:05:15.253 23:31:04 -- common/autotest_common.sh@1592 -- # waitforlisten 1286618 00:05:15.253 23:31:04 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.253 23:31:04 -- common/autotest_common.sh@823 -- # '[' -z 1286618 ']' 00:05:15.253 23:31:04 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.253 23:31:04 -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:15.253 23:31:04 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.253 23:31:04 -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:15.253 23:31:04 -- common/autotest_common.sh@10 -- # set +x 00:05:15.253 [2024-07-15 23:31:04.081750] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
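The controller probes traced above boil down to two ideas: enumerate local NVMe controllers by PCI address, and read a few identify-controller fields to decide what the drive supports. A rough equivalent outside the harness, assuming an SPDK checkout in the current directory, nvme-cli installed, and the controller bound to the kernel nvme driver so /dev/nvme0 exists:

    # PCI addresses of local NVMe controllers, as SPDK's config generator sees them
    bdfs=($(./scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"

    # OACS (Optional Admin Command Support); bit 0x8 is namespace management
    oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) && echo "nvme0 supports namespace management"

    # unallocated NVM capacity; 0 means all capacity is already allocated to namespaces
    nvme id-ctrl /dev/nvme0 | grep unvmcap | cut -d: -f2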
00:05:15.253 [2024-07-15 23:31:04.081790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1286618 ] 00:05:15.253 [2024-07-15 23:31:04.136498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.253 [2024-07-15 23:31:04.213623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.187 23:31:04 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:16.187 23:31:04 -- common/autotest_common.sh@856 -- # return 0 00:05:16.187 23:31:04 -- common/autotest_common.sh@1594 -- # bdf_id=0 00:05:16.187 23:31:04 -- common/autotest_common.sh@1595 -- # for bdf in "${bdfs[@]}" 00:05:16.187 23:31:04 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:05:19.463 nvme0n1 00:05:19.463 23:31:07 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:19.463 [2024-07-15 23:31:07.998528] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:19.463 request: 00:05:19.463 { 00:05:19.463 "nvme_ctrlr_name": "nvme0", 00:05:19.463 "password": "test", 00:05:19.463 "method": "bdev_nvme_opal_revert", 00:05:19.463 "req_id": 1 00:05:19.463 } 00:05:19.463 Got JSON-RPC error response 00:05:19.463 response: 00:05:19.463 { 00:05:19.463 "code": -32602, 00:05:19.463 "message": "Invalid parameters" 00:05:19.463 } 00:05:19.463 23:31:08 -- common/autotest_common.sh@1598 -- # true 00:05:19.463 23:31:08 -- common/autotest_common.sh@1599 -- # (( ++bdf_id )) 00:05:19.463 23:31:08 -- common/autotest_common.sh@1602 -- # killprocess 1286618 00:05:19.463 23:31:08 -- common/autotest_common.sh@942 -- # '[' -z 1286618 ']' 00:05:19.463 23:31:08 -- common/autotest_common.sh@946 -- # kill -0 1286618 00:05:19.463 23:31:08 -- common/autotest_common.sh@947 -- # uname 00:05:19.463 23:31:08 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:19.463 23:31:08 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1286618 00:05:19.463 23:31:08 -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:19.463 23:31:08 -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:19.463 23:31:08 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1286618' 00:05:19.463 killing process with pid 1286618 00:05:19.463 23:31:08 -- common/autotest_common.sh@961 -- # kill 1286618 00:05:19.463 23:31:08 -- common/autotest_common.sh@966 -- # wait 1286618 00:05:21.357 23:31:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:21.358 23:31:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:21.358 23:31:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:21.358 23:31:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:21.358 23:31:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:21.358 23:31:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:21.358 23:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:21.358 23:31:10 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:21.358 23:31:10 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:21.358 23:31:10 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:21.358 23:31:10 -- common/autotest_common.sh@1099 -- # 
xtrace_disable 00:05:21.358 23:31:10 -- common/autotest_common.sh@10 -- # set +x 00:05:21.358 ************************************ 00:05:21.358 START TEST env 00:05:21.358 ************************************ 00:05:21.358 23:31:10 env -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:21.615 * Looking for test storage... 00:05:21.615 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:21.615 23:31:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:21.615 23:31:10 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:21.615 23:31:10 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:21.615 23:31:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.615 ************************************ 00:05:21.615 START TEST env_memory 00:05:21.615 ************************************ 00:05:21.615 23:31:10 env.env_memory -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:21.615 00:05:21.615 00:05:21.615 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.615 http://cunit.sourceforge.net/ 00:05:21.615 00:05:21.615 00:05:21.615 Suite: memory 00:05:21.615 Test: alloc and free memory map ...[2024-07-15 23:31:10.426813] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:21.615 passed 00:05:21.615 Test: mem map translation ...[2024-07-15 23:31:10.444455] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:21.615 [2024-07-15 23:31:10.444469] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:21.615 [2024-07-15 23:31:10.444505] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:21.615 [2024-07-15 23:31:10.444512] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:21.615 passed 00:05:21.615 Test: mem map registration ...[2024-07-15 23:31:10.480411] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:21.615 [2024-07-15 23:31:10.480429] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:21.615 passed 00:05:21.615 Test: mem map adjacent registrations ...passed 00:05:21.615 00:05:21.615 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.615 suites 1 1 n/a 0 0 00:05:21.615 tests 4 4 4 0 0 00:05:21.615 asserts 152 152 152 0 n/a 00:05:21.615 00:05:21.615 Elapsed time = 0.135 seconds 00:05:21.615 00:05:21.615 real 0m0.146s 00:05:21.615 user 0m0.140s 00:05:21.615 sys 0m0.006s 00:05:21.615 23:31:10 env.env_memory -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:21.615 23:31:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:21.615 ************************************ 00:05:21.615 END TEST env_memory 00:05:21.615 
************************************ 00:05:21.615 23:31:10 env -- common/autotest_common.sh@1136 -- # return 0 00:05:21.615 23:31:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:21.615 23:31:10 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:21.615 23:31:10 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:21.615 23:31:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:21.615 ************************************ 00:05:21.615 START TEST env_vtophys 00:05:21.615 ************************************ 00:05:21.615 23:31:10 env.env_vtophys -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:21.874 EAL: lib.eal log level changed from notice to debug 00:05:21.874 EAL: Detected lcore 0 as core 0 on socket 0 00:05:21.874 EAL: Detected lcore 1 as core 1 on socket 0 00:05:21.874 EAL: Detected lcore 2 as core 2 on socket 0 00:05:21.874 EAL: Detected lcore 3 as core 3 on socket 0 00:05:21.874 EAL: Detected lcore 4 as core 4 on socket 0 00:05:21.874 EAL: Detected lcore 5 as core 5 on socket 0 00:05:21.874 EAL: Detected lcore 6 as core 6 on socket 0 00:05:21.874 EAL: Detected lcore 7 as core 9 on socket 0 00:05:21.874 EAL: Detected lcore 8 as core 10 on socket 0 00:05:21.874 EAL: Detected lcore 9 as core 11 on socket 0 00:05:21.874 EAL: Detected lcore 10 as core 12 on socket 0 00:05:21.874 EAL: Detected lcore 11 as core 13 on socket 0 00:05:21.874 EAL: Detected lcore 12 as core 16 on socket 0 00:05:21.874 EAL: Detected lcore 13 as core 17 on socket 0 00:05:21.874 EAL: Detected lcore 14 as core 18 on socket 0 00:05:21.874 EAL: Detected lcore 15 as core 19 on socket 0 00:05:21.874 EAL: Detected lcore 16 as core 20 on socket 0 00:05:21.874 EAL: Detected lcore 17 as core 21 on socket 0 00:05:21.874 EAL: Detected lcore 18 as core 24 on socket 0 00:05:21.874 EAL: Detected lcore 19 as core 25 on socket 0 00:05:21.874 EAL: Detected lcore 20 as core 26 on socket 0 00:05:21.874 EAL: Detected lcore 21 as core 27 on socket 0 00:05:21.874 EAL: Detected lcore 22 as core 28 on socket 0 00:05:21.874 EAL: Detected lcore 23 as core 29 on socket 0 00:05:21.874 EAL: Detected lcore 24 as core 0 on socket 1 00:05:21.874 EAL: Detected lcore 25 as core 1 on socket 1 00:05:21.874 EAL: Detected lcore 26 as core 2 on socket 1 00:05:21.874 EAL: Detected lcore 27 as core 3 on socket 1 00:05:21.874 EAL: Detected lcore 28 as core 4 on socket 1 00:05:21.874 EAL: Detected lcore 29 as core 5 on socket 1 00:05:21.874 EAL: Detected lcore 30 as core 6 on socket 1 00:05:21.874 EAL: Detected lcore 31 as core 8 on socket 1 00:05:21.874 EAL: Detected lcore 32 as core 9 on socket 1 00:05:21.874 EAL: Detected lcore 33 as core 10 on socket 1 00:05:21.874 EAL: Detected lcore 34 as core 11 on socket 1 00:05:21.874 EAL: Detected lcore 35 as core 12 on socket 1 00:05:21.874 EAL: Detected lcore 36 as core 13 on socket 1 00:05:21.874 EAL: Detected lcore 37 as core 16 on socket 1 00:05:21.874 EAL: Detected lcore 38 as core 17 on socket 1 00:05:21.874 EAL: Detected lcore 39 as core 18 on socket 1 00:05:21.874 EAL: Detected lcore 40 as core 19 on socket 1 00:05:21.874 EAL: Detected lcore 41 as core 20 on socket 1 00:05:21.874 EAL: Detected lcore 42 as core 21 on socket 1 00:05:21.874 EAL: Detected lcore 43 as core 25 on socket 1 00:05:21.874 EAL: Detected lcore 44 as core 26 on socket 1 00:05:21.874 EAL: Detected lcore 45 as core 27 on socket 1 00:05:21.874 EAL: Detected lcore 46 
as core 28 on socket 1 00:05:21.874 EAL: Detected lcore 47 as core 29 on socket 1 00:05:21.874 EAL: Detected lcore 48 as core 0 on socket 0 00:05:21.874 EAL: Detected lcore 49 as core 1 on socket 0 00:05:21.874 EAL: Detected lcore 50 as core 2 on socket 0 00:05:21.874 EAL: Detected lcore 51 as core 3 on socket 0 00:05:21.874 EAL: Detected lcore 52 as core 4 on socket 0 00:05:21.874 EAL: Detected lcore 53 as core 5 on socket 0 00:05:21.874 EAL: Detected lcore 54 as core 6 on socket 0 00:05:21.874 EAL: Detected lcore 55 as core 9 on socket 0 00:05:21.874 EAL: Detected lcore 56 as core 10 on socket 0 00:05:21.874 EAL: Detected lcore 57 as core 11 on socket 0 00:05:21.874 EAL: Detected lcore 58 as core 12 on socket 0 00:05:21.874 EAL: Detected lcore 59 as core 13 on socket 0 00:05:21.874 EAL: Detected lcore 60 as core 16 on socket 0 00:05:21.874 EAL: Detected lcore 61 as core 17 on socket 0 00:05:21.874 EAL: Detected lcore 62 as core 18 on socket 0 00:05:21.874 EAL: Detected lcore 63 as core 19 on socket 0 00:05:21.874 EAL: Detected lcore 64 as core 20 on socket 0 00:05:21.874 EAL: Detected lcore 65 as core 21 on socket 0 00:05:21.874 EAL: Detected lcore 66 as core 24 on socket 0 00:05:21.874 EAL: Detected lcore 67 as core 25 on socket 0 00:05:21.874 EAL: Detected lcore 68 as core 26 on socket 0 00:05:21.874 EAL: Detected lcore 69 as core 27 on socket 0 00:05:21.874 EAL: Detected lcore 70 as core 28 on socket 0 00:05:21.874 EAL: Detected lcore 71 as core 29 on socket 0 00:05:21.874 EAL: Detected lcore 72 as core 0 on socket 1 00:05:21.874 EAL: Detected lcore 73 as core 1 on socket 1 00:05:21.874 EAL: Detected lcore 74 as core 2 on socket 1 00:05:21.874 EAL: Detected lcore 75 as core 3 on socket 1 00:05:21.874 EAL: Detected lcore 76 as core 4 on socket 1 00:05:21.874 EAL: Detected lcore 77 as core 5 on socket 1 00:05:21.874 EAL: Detected lcore 78 as core 6 on socket 1 00:05:21.874 EAL: Detected lcore 79 as core 8 on socket 1 00:05:21.874 EAL: Detected lcore 80 as core 9 on socket 1 00:05:21.874 EAL: Detected lcore 81 as core 10 on socket 1 00:05:21.874 EAL: Detected lcore 82 as core 11 on socket 1 00:05:21.874 EAL: Detected lcore 83 as core 12 on socket 1 00:05:21.874 EAL: Detected lcore 84 as core 13 on socket 1 00:05:21.874 EAL: Detected lcore 85 as core 16 on socket 1 00:05:21.874 EAL: Detected lcore 86 as core 17 on socket 1 00:05:21.874 EAL: Detected lcore 87 as core 18 on socket 1 00:05:21.874 EAL: Detected lcore 88 as core 19 on socket 1 00:05:21.874 EAL: Detected lcore 89 as core 20 on socket 1 00:05:21.874 EAL: Detected lcore 90 as core 21 on socket 1 00:05:21.874 EAL: Detected lcore 91 as core 25 on socket 1 00:05:21.874 EAL: Detected lcore 92 as core 26 on socket 1 00:05:21.874 EAL: Detected lcore 93 as core 27 on socket 1 00:05:21.874 EAL: Detected lcore 94 as core 28 on socket 1 00:05:21.874 EAL: Detected lcore 95 as core 29 on socket 1 00:05:21.874 EAL: Maximum logical cores by configuration: 128 00:05:21.874 EAL: Detected CPU lcores: 96 00:05:21.874 EAL: Detected NUMA nodes: 2 00:05:21.874 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:21.874 EAL: Detected shared linkage of DPDK 00:05:21.874 EAL: No shared files mode enabled, IPC will be disabled 00:05:21.874 EAL: Bus pci wants IOVA as 'DC' 00:05:21.874 EAL: Buses did not request a specific IOVA mode. 00:05:21.874 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:21.874 EAL: Selected IOVA mode 'VA' 00:05:21.874 EAL: Probing VFIO support... 
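EAL selects IOVA-as-VA above because the platform exposes a working IOMMU with VFIO type-1 support. A quick way to confirm those preconditions from the shell before binding devices to vfio-pci, shown as a sketch over standard sysfs paths (not taken from the autotest scripts):

    # A populated /sys/kernel/iommu_groups means the kernel exposes an IOMMU,
    # which is what lets EAL pick IOVA mode 'VA' with VFIO type 1.
    groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 -type d | wc -l)
    if [ "$groups" -gt 0 ]; then
        echo "IOMMU enabled ($groups groups): vfio-pci with IOVA=VA should be usable"
    else
        echo "no IOMMU groups: expect vfio no-IOMMU mode or IOVA=PA"
    fi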
00:05:21.874 EAL: IOMMU type 1 (Type 1) is supported 00:05:21.874 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:21.874 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:21.874 EAL: VFIO support initialized 00:05:21.874 EAL: Ask a virtual area of 0x2e000 bytes 00:05:21.874 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:21.874 EAL: Setting up physically contiguous memory... 00:05:21.874 EAL: Setting maximum number of open files to 524288 00:05:21.874 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:21.874 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:21.874 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:21.874 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.874 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:21.874 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.874 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.874 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:21.874 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:21.874 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.874 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:21.874 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.874 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.874 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:21.874 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:21.874 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.874 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:21.874 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.874 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.874 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:21.874 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:21.874 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.874 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:21.874 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:21.874 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.874 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:21.874 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:21.874 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:21.874 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.874 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:21.874 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:21.874 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.874 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:21.874 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:21.874 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.874 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:21.874 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:21.874 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.874 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:21.874 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:21.874 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.874 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:21.874 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:21.874 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.874 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:21.874 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:21.874 EAL: Ask a virtual area of 0x61000 bytes 00:05:21.874 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:21.874 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:21.874 EAL: Ask a virtual area of 0x400000000 bytes 00:05:21.874 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:21.874 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:21.874 EAL: Hugepages will be freed exactly as allocated. 00:05:21.874 EAL: No shared files mode enabled, IPC is disabled 00:05:21.874 EAL: No shared files mode enabled, IPC is disabled 00:05:21.874 EAL: TSC frequency is ~2100000 KHz 00:05:21.874 EAL: Main lcore 0 is ready (tid=7f4ed1971a00;cpuset=[0]) 00:05:21.874 EAL: Trying to obtain current memory policy. 00:05:21.874 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.874 EAL: Restoring previous memory policy: 0 00:05:21.874 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 2MB 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:21.875 EAL: Mem event callback 'spdk:(nil)' registered 00:05:21.875 00:05:21.875 00:05:21.875 CUnit - A unit testing framework for C - Version 2.1-3 00:05:21.875 http://cunit.sourceforge.net/ 00:05:21.875 00:05:21.875 00:05:21.875 Suite: components_suite 00:05:21.875 Test: vtophys_malloc_test ...passed 00:05:21.875 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:21.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.875 EAL: Restoring previous memory policy: 4 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 4MB 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was shrunk by 4MB 00:05:21.875 EAL: Trying to obtain current memory policy. 00:05:21.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.875 EAL: Restoring previous memory policy: 4 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 6MB 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was shrunk by 6MB 00:05:21.875 EAL: Trying to obtain current memory policy. 00:05:21.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.875 EAL: Restoring previous memory policy: 4 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 10MB 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was shrunk by 10MB 00:05:21.875 EAL: Trying to obtain current memory policy. 
00:05:21.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.875 EAL: Restoring previous memory policy: 4 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 18MB 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was shrunk by 18MB 00:05:21.875 EAL: Trying to obtain current memory policy. 00:05:21.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.875 EAL: Restoring previous memory policy: 4 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 34MB 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was shrunk by 34MB 00:05:21.875 EAL: Trying to obtain current memory policy. 00:05:21.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.875 EAL: Restoring previous memory policy: 4 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 66MB 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was shrunk by 66MB 00:05:21.875 EAL: Trying to obtain current memory policy. 00:05:21.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.875 EAL: Restoring previous memory policy: 4 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 130MB 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was shrunk by 130MB 00:05:21.875 EAL: Trying to obtain current memory policy. 00:05:21.875 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:21.875 EAL: Restoring previous memory policy: 4 00:05:21.875 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.875 EAL: request: mp_malloc_sync 00:05:21.875 EAL: No shared files mode enabled, IPC is disabled 00:05:21.875 EAL: Heap on socket 0 was expanded by 258MB 00:05:22.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.132 EAL: request: mp_malloc_sync 00:05:22.132 EAL: No shared files mode enabled, IPC is disabled 00:05:22.132 EAL: Heap on socket 0 was shrunk by 258MB 00:05:22.132 EAL: Trying to obtain current memory policy. 
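The expand/shrink messages surrounding this point come from DPDK's dynamic memory subsystem mapping and unmapping 2 MB hugepages on socket 0 as the test allocates progressively larger buffers and frees them again. One way to watch that from outside the test is to poll the per-node hugepage counters in sysfs; a small sketch, assuming 2 MB hugepages on NUMA node 0 (standard sysfs paths, not part of the test itself):

    # Watch the 2 MB hugepage pool on node 0; free_hugepages should drop while
    # "Heap on socket 0 was expanded by ..." messages appear and recover after
    # the corresponding "was shrunk by ..." messages.
    hp=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    while sleep 1; do
        echo "total=$(cat $hp/nr_hugepages) free=$(cat $hp/free_hugepages)"
    done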
00:05:22.132 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.132 EAL: Restoring previous memory policy: 4 00:05:22.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.132 EAL: request: mp_malloc_sync 00:05:22.132 EAL: No shared files mode enabled, IPC is disabled 00:05:22.132 EAL: Heap on socket 0 was expanded by 514MB 00:05:22.132 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.390 EAL: request: mp_malloc_sync 00:05:22.390 EAL: No shared files mode enabled, IPC is disabled 00:05:22.390 EAL: Heap on socket 0 was shrunk by 514MB 00:05:22.390 EAL: Trying to obtain current memory policy. 00:05:22.390 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:22.390 EAL: Restoring previous memory policy: 4 00:05:22.390 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.390 EAL: request: mp_malloc_sync 00:05:22.390 EAL: No shared files mode enabled, IPC is disabled 00:05:22.390 EAL: Heap on socket 0 was expanded by 1026MB 00:05:22.647 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.904 EAL: request: mp_malloc_sync 00:05:22.904 EAL: No shared files mode enabled, IPC is disabled 00:05:22.904 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:22.904 passed 00:05:22.904 00:05:22.904 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.904 suites 1 1 n/a 0 0 00:05:22.904 tests 2 2 2 0 0 00:05:22.905 asserts 497 497 497 0 n/a 00:05:22.905 00:05:22.905 Elapsed time = 0.959 seconds 00:05:22.905 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.905 EAL: request: mp_malloc_sync 00:05:22.905 EAL: No shared files mode enabled, IPC is disabled 00:05:22.905 EAL: Heap on socket 0 was shrunk by 2MB 00:05:22.905 EAL: No shared files mode enabled, IPC is disabled 00:05:22.905 EAL: No shared files mode enabled, IPC is disabled 00:05:22.905 EAL: No shared files mode enabled, IPC is disabled 00:05:22.905 00:05:22.905 real 0m1.070s 00:05:22.905 user 0m0.625s 00:05:22.905 sys 0m0.419s 00:05:22.905 23:31:11 env.env_vtophys -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:22.905 23:31:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:22.905 ************************************ 00:05:22.905 END TEST env_vtophys 00:05:22.905 ************************************ 00:05:22.905 23:31:11 env -- common/autotest_common.sh@1136 -- # return 0 00:05:22.905 23:31:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:22.905 23:31:11 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:22.905 23:31:11 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:22.905 23:31:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.905 ************************************ 00:05:22.905 START TEST env_pci 00:05:22.905 ************************************ 00:05:22.905 23:31:11 env.env_pci -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:22.905 00:05:22.905 00:05:22.905 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.905 http://cunit.sourceforge.net/ 00:05:22.905 00:05:22.905 00:05:22.905 Suite: pci 00:05:22.905 Test: pci_hook ...[2024-07-15 23:31:11.745721] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1287918 has claimed it 00:05:22.905 EAL: Cannot find device (10000:00:01.0) 00:05:22.905 EAL: Failed to attach device on primary process 00:05:22.905 passed 00:05:22.905 00:05:22.905 
Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.905 suites 1 1 n/a 0 0 00:05:22.905 tests 1 1 1 0 0 00:05:22.905 asserts 25 25 25 0 n/a 00:05:22.905 00:05:22.905 Elapsed time = 0.026 seconds 00:05:22.905 00:05:22.905 real 0m0.044s 00:05:22.905 user 0m0.015s 00:05:22.905 sys 0m0.029s 00:05:22.905 23:31:11 env.env_pci -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:22.905 23:31:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:22.905 ************************************ 00:05:22.905 END TEST env_pci 00:05:22.905 ************************************ 00:05:22.905 23:31:11 env -- common/autotest_common.sh@1136 -- # return 0 00:05:22.905 23:31:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:22.905 23:31:11 env -- env/env.sh@15 -- # uname 00:05:22.905 23:31:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:22.905 23:31:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:22.905 23:31:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.905 23:31:11 env -- common/autotest_common.sh@1093 -- # '[' 5 -le 1 ']' 00:05:22.905 23:31:11 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:22.905 23:31:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.905 ************************************ 00:05:22.905 START TEST env_dpdk_post_init 00:05:22.905 ************************************ 00:05:22.905 23:31:11 env.env_dpdk_post_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.905 EAL: Detected CPU lcores: 96 00:05:22.905 EAL: Detected NUMA nodes: 2 00:05:22.905 EAL: Detected shared linkage of DPDK 00:05:22.905 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:23.163 EAL: Selected IOVA mode 'VA' 00:05:23.163 EAL: VFIO support initialized 00:05:23.163 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:23.163 EAL: Using IOMMU type 1 (Type 1) 00:05:23.163 EAL: Ignore mapping IO port bar(1) 00:05:23.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:23.163 EAL: Ignore mapping IO port bar(1) 00:05:23.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:23.163 EAL: Ignore mapping IO port bar(1) 00:05:23.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:23.163 EAL: Ignore mapping IO port bar(1) 00:05:23.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:23.163 EAL: Ignore mapping IO port bar(1) 00:05:23.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:23.163 EAL: Ignore mapping IO port bar(1) 00:05:23.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:23.163 EAL: Ignore mapping IO port bar(1) 00:05:23.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:23.163 EAL: Ignore mapping IO port bar(1) 00:05:23.163 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:24.098 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: 
spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:24.098 EAL: Ignore mapping IO port bar(1) 00:05:24.098 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:28.269 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:05:28.269 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:05:28.269 Starting DPDK initialization... 00:05:28.269 Starting SPDK post initialization... 00:05:28.269 SPDK NVMe probe 00:05:28.269 Attaching to 0000:5f:00.0 00:05:28.269 Attached to 0000:5f:00.0 00:05:28.269 Cleaning up... 00:05:28.269 00:05:28.269 real 0m4.935s 00:05:28.269 user 0m3.843s 00:05:28.269 sys 0m0.163s 00:05:28.269 23:31:16 env.env_dpdk_post_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:28.269 23:31:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.269 ************************************ 00:05:28.269 END TEST env_dpdk_post_init 00:05:28.269 ************************************ 00:05:28.269 23:31:16 env -- common/autotest_common.sh@1136 -- # return 0 00:05:28.269 23:31:16 env -- env/env.sh@26 -- # uname 00:05:28.269 23:31:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:28.269 23:31:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.269 23:31:16 env -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:28.269 23:31:16 env -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:28.269 23:31:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.269 ************************************ 00:05:28.269 START TEST env_mem_callbacks 00:05:28.269 ************************************ 00:05:28.269 23:31:16 env.env_mem_callbacks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:28.269 EAL: Detected CPU lcores: 96 00:05:28.269 EAL: Detected NUMA nodes: 2 00:05:28.269 EAL: Detected shared linkage of DPDK 00:05:28.269 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:28.269 EAL: Selected IOVA mode 'VA' 00:05:28.269 EAL: VFIO support initialized 00:05:28.269 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:28.269 00:05:28.269 00:05:28.269 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.269 http://cunit.sourceforge.net/ 00:05:28.269 00:05:28.269 00:05:28.269 Suite: memory 00:05:28.269 Test: test ... 
00:05:28.269 register 0x200000200000 2097152 00:05:28.269 malloc 3145728 00:05:28.269 register 0x200000400000 4194304 00:05:28.269 buf 0x200000500000 len 3145728 PASSED 00:05:28.269 malloc 64 00:05:28.269 buf 0x2000004fff40 len 64 PASSED 00:05:28.269 malloc 4194304 00:05:28.269 register 0x200000800000 6291456 00:05:28.269 buf 0x200000a00000 len 4194304 PASSED 00:05:28.269 free 0x200000500000 3145728 00:05:28.269 free 0x2000004fff40 64 00:05:28.269 unregister 0x200000400000 4194304 PASSED 00:05:28.269 free 0x200000a00000 4194304 00:05:28.269 unregister 0x200000800000 6291456 PASSED 00:05:28.269 malloc 8388608 00:05:28.269 register 0x200000400000 10485760 00:05:28.269 buf 0x200000600000 len 8388608 PASSED 00:05:28.269 free 0x200000600000 8388608 00:05:28.269 unregister 0x200000400000 10485760 PASSED 00:05:28.269 passed 00:05:28.269 00:05:28.269 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.269 suites 1 1 n/a 0 0 00:05:28.269 tests 1 1 1 0 0 00:05:28.269 asserts 15 15 15 0 n/a 00:05:28.269 00:05:28.269 Elapsed time = 0.005 seconds 00:05:28.269 00:05:28.269 real 0m0.051s 00:05:28.269 user 0m0.014s 00:05:28.269 sys 0m0.036s 00:05:28.269 23:31:16 env.env_mem_callbacks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:28.269 23:31:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:28.269 ************************************ 00:05:28.269 END TEST env_mem_callbacks 00:05:28.269 ************************************ 00:05:28.269 23:31:16 env -- common/autotest_common.sh@1136 -- # return 0 00:05:28.269 00:05:28.269 real 0m6.650s 00:05:28.269 user 0m4.784s 00:05:28.269 sys 0m0.941s 00:05:28.269 23:31:16 env -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:28.269 23:31:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.269 ************************************ 00:05:28.269 END TEST env 00:05:28.269 ************************************ 00:05:28.269 23:31:16 -- common/autotest_common.sh@1136 -- # return 0 00:05:28.269 23:31:16 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.269 23:31:16 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:28.269 23:31:16 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:28.269 23:31:16 -- common/autotest_common.sh@10 -- # set +x 00:05:28.269 ************************************ 00:05:28.269 START TEST rpc 00:05:28.269 ************************************ 00:05:28.269 23:31:16 rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:28.269 * Looking for test storage... 00:05:28.269 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:28.269 23:31:17 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1288963 00:05:28.269 23:31:17 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.269 23:31:17 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:28.269 23:31:17 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1288963 00:05:28.269 23:31:17 rpc -- common/autotest_common.sh@823 -- # '[' -z 1288963 ']' 00:05:28.269 23:31:17 rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.269 23:31:17 rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:28.269 23:31:17 rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:28.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.269 23:31:17 rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:28.269 23:31:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.269 [2024-07-15 23:31:17.111743] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:28.269 [2024-07-15 23:31:17.111794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288963 ] 00:05:28.269 [2024-07-15 23:31:17.167374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.269 [2024-07-15 23:31:17.249654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:28.269 [2024-07-15 23:31:17.249687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1288963' to capture a snapshot of events at runtime. 00:05:28.269 [2024-07-15 23:31:17.249694] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:28.269 [2024-07-15 23:31:17.249701] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:28.269 [2024-07-15 23:31:17.249706] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1288963 for offline analysis/debug. 00:05:28.269 [2024-07-15 23:31:17.249724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.200 23:31:17 rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:29.200 23:31:17 rpc -- common/autotest_common.sh@856 -- # return 0 00:05:29.200 23:31:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:29.200 23:31:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:29.200 23:31:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:29.200 23:31:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:29.200 23:31:17 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:29.200 23:31:17 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:29.200 23:31:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.200 ************************************ 00:05:29.200 START TEST rpc_integrity 00:05:29.200 ************************************ 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:05:29.200 23:31:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.200 23:31:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.200 23:31:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.200 23:31:17 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.200 23:31:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.200 23:31:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:29.200 23:31:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.200 23:31:17 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.200 23:31:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.200 { 00:05:29.200 "name": "Malloc0", 00:05:29.200 "aliases": [ 00:05:29.200 "b3976a25-4826-4972-998c-784db22a85f9" 00:05:29.200 ], 00:05:29.200 "product_name": "Malloc disk", 00:05:29.200 "block_size": 512, 00:05:29.200 "num_blocks": 16384, 00:05:29.200 "uuid": "b3976a25-4826-4972-998c-784db22a85f9", 00:05:29.200 "assigned_rate_limits": { 00:05:29.200 "rw_ios_per_sec": 0, 00:05:29.200 "rw_mbytes_per_sec": 0, 00:05:29.200 "r_mbytes_per_sec": 0, 00:05:29.200 "w_mbytes_per_sec": 0 00:05:29.200 }, 00:05:29.200 "claimed": false, 00:05:29.200 "zoned": false, 00:05:29.200 "supported_io_types": { 00:05:29.200 "read": true, 00:05:29.200 "write": true, 00:05:29.200 "unmap": true, 00:05:29.200 "flush": true, 00:05:29.200 "reset": true, 00:05:29.200 "nvme_admin": false, 00:05:29.200 "nvme_io": false, 00:05:29.200 "nvme_io_md": false, 00:05:29.200 "write_zeroes": true, 00:05:29.200 "zcopy": true, 00:05:29.200 "get_zone_info": false, 00:05:29.200 "zone_management": false, 00:05:29.200 "zone_append": false, 00:05:29.200 "compare": false, 00:05:29.200 "compare_and_write": false, 00:05:29.200 "abort": true, 00:05:29.200 "seek_hole": false, 00:05:29.200 "seek_data": false, 00:05:29.200 "copy": true, 00:05:29.200 "nvme_iov_md": false 00:05:29.200 }, 00:05:29.200 "memory_domains": [ 00:05:29.200 { 00:05:29.200 "dma_device_id": "system", 00:05:29.200 "dma_device_type": 1 00:05:29.200 }, 00:05:29.200 { 00:05:29.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.200 "dma_device_type": 2 00:05:29.200 } 00:05:29.200 ], 00:05:29.200 "driver_specific": {} 00:05:29.200 } 00:05:29.200 ]' 00:05:29.200 23:31:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.200 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.200 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:29.200 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.200 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.200 [2024-07-15 23:31:18.046991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:29.200 [2024-07-15 23:31:18.047019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.200 [2024-07-15 23:31:18.047031] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12f20d0 00:05:29.200 [2024-07-15 23:31:18.047037] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.200 [2024-07-15 23:31:18.048058] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:29.200 [2024-07-15 23:31:18.048078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.200 Passthru0 00:05:29.200 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.200 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.200 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.200 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.200 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.200 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.200 { 00:05:29.200 "name": "Malloc0", 00:05:29.200 "aliases": [ 00:05:29.200 "b3976a25-4826-4972-998c-784db22a85f9" 00:05:29.200 ], 00:05:29.200 "product_name": "Malloc disk", 00:05:29.200 "block_size": 512, 00:05:29.200 "num_blocks": 16384, 00:05:29.200 "uuid": "b3976a25-4826-4972-998c-784db22a85f9", 00:05:29.200 "assigned_rate_limits": { 00:05:29.200 "rw_ios_per_sec": 0, 00:05:29.200 "rw_mbytes_per_sec": 0, 00:05:29.200 "r_mbytes_per_sec": 0, 00:05:29.200 "w_mbytes_per_sec": 0 00:05:29.200 }, 00:05:29.200 "claimed": true, 00:05:29.200 "claim_type": "exclusive_write", 00:05:29.200 "zoned": false, 00:05:29.200 "supported_io_types": { 00:05:29.200 "read": true, 00:05:29.200 "write": true, 00:05:29.200 "unmap": true, 00:05:29.200 "flush": true, 00:05:29.200 "reset": true, 00:05:29.200 "nvme_admin": false, 00:05:29.200 "nvme_io": false, 00:05:29.200 "nvme_io_md": false, 00:05:29.200 "write_zeroes": true, 00:05:29.200 "zcopy": true, 00:05:29.200 "get_zone_info": false, 00:05:29.200 "zone_management": false, 00:05:29.200 "zone_append": false, 00:05:29.200 "compare": false, 00:05:29.200 "compare_and_write": false, 00:05:29.200 "abort": true, 00:05:29.200 "seek_hole": false, 00:05:29.200 "seek_data": false, 00:05:29.200 "copy": true, 00:05:29.200 "nvme_iov_md": false 00:05:29.200 }, 00:05:29.200 "memory_domains": [ 00:05:29.200 { 00:05:29.200 "dma_device_id": "system", 00:05:29.200 "dma_device_type": 1 00:05:29.200 }, 00:05:29.200 { 00:05:29.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.200 "dma_device_type": 2 00:05:29.200 } 00:05:29.200 ], 00:05:29.200 "driver_specific": {} 00:05:29.200 }, 00:05:29.200 { 00:05:29.200 "name": "Passthru0", 00:05:29.201 "aliases": [ 00:05:29.201 "71e275b1-018d-5b5b-9559-527f0c2f55bc" 00:05:29.201 ], 00:05:29.201 "product_name": "passthru", 00:05:29.201 "block_size": 512, 00:05:29.201 "num_blocks": 16384, 00:05:29.201 "uuid": "71e275b1-018d-5b5b-9559-527f0c2f55bc", 00:05:29.201 "assigned_rate_limits": { 00:05:29.201 "rw_ios_per_sec": 0, 00:05:29.201 "rw_mbytes_per_sec": 0, 00:05:29.201 "r_mbytes_per_sec": 0, 00:05:29.201 "w_mbytes_per_sec": 0 00:05:29.201 }, 00:05:29.201 "claimed": false, 00:05:29.201 "zoned": false, 00:05:29.201 "supported_io_types": { 00:05:29.201 "read": true, 00:05:29.201 "write": true, 00:05:29.201 "unmap": true, 00:05:29.201 "flush": true, 00:05:29.201 "reset": true, 00:05:29.201 "nvme_admin": false, 00:05:29.201 "nvme_io": false, 00:05:29.201 "nvme_io_md": false, 00:05:29.201 "write_zeroes": true, 00:05:29.201 "zcopy": true, 00:05:29.201 "get_zone_info": false, 00:05:29.201 "zone_management": false, 00:05:29.201 "zone_append": false, 00:05:29.201 "compare": false, 00:05:29.201 "compare_and_write": false, 00:05:29.201 "abort": true, 00:05:29.201 "seek_hole": false, 00:05:29.201 "seek_data": false, 00:05:29.201 "copy": true, 00:05:29.201 "nvme_iov_md": false 00:05:29.201 
}, 00:05:29.201 "memory_domains": [ 00:05:29.201 { 00:05:29.201 "dma_device_id": "system", 00:05:29.201 "dma_device_type": 1 00:05:29.201 }, 00:05:29.201 { 00:05:29.201 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.201 "dma_device_type": 2 00:05:29.201 } 00:05:29.201 ], 00:05:29.201 "driver_specific": { 00:05:29.201 "passthru": { 00:05:29.201 "name": "Passthru0", 00:05:29.201 "base_bdev_name": "Malloc0" 00:05:29.201 } 00:05:29.201 } 00:05:29.201 } 00:05:29.201 ]' 00:05:29.201 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.201 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.201 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.201 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.201 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.201 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.201 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:29.201 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:29.458 23:31:18 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:29.458 00:05:29.458 real 0m0.265s 00:05:29.458 user 0m0.161s 00:05:29.458 sys 0m0.038s 00:05:29.458 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:29.458 23:31:18 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.458 ************************************ 00:05:29.458 END TEST rpc_integrity 00:05:29.458 ************************************ 00:05:29.458 23:31:18 rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:29.458 23:31:18 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:29.458 23:31:18 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:29.458 23:31:18 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:29.458 23:31:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.458 ************************************ 00:05:29.458 START TEST rpc_plugins 00:05:29.458 ************************************ 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@1117 -- # rpc_plugins 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:29.458 { 00:05:29.458 "name": "Malloc1", 00:05:29.458 "aliases": [ 00:05:29.458 "7555a547-5dc5-48ab-a942-1634956eda3f" 00:05:29.458 ], 00:05:29.458 "product_name": "Malloc disk", 00:05:29.458 "block_size": 4096, 00:05:29.458 "num_blocks": 256, 00:05:29.458 "uuid": "7555a547-5dc5-48ab-a942-1634956eda3f", 00:05:29.458 "assigned_rate_limits": { 00:05:29.458 "rw_ios_per_sec": 0, 00:05:29.458 "rw_mbytes_per_sec": 0, 00:05:29.458 "r_mbytes_per_sec": 0, 00:05:29.458 "w_mbytes_per_sec": 0 00:05:29.458 }, 00:05:29.458 "claimed": false, 00:05:29.458 "zoned": false, 00:05:29.458 "supported_io_types": { 00:05:29.458 "read": true, 00:05:29.458 "write": true, 00:05:29.458 "unmap": true, 00:05:29.458 "flush": true, 00:05:29.458 "reset": true, 00:05:29.458 "nvme_admin": false, 00:05:29.458 "nvme_io": false, 00:05:29.458 "nvme_io_md": false, 00:05:29.458 "write_zeroes": true, 00:05:29.458 "zcopy": true, 00:05:29.458 "get_zone_info": false, 00:05:29.458 "zone_management": false, 00:05:29.458 "zone_append": false, 00:05:29.458 "compare": false, 00:05:29.458 "compare_and_write": false, 00:05:29.458 "abort": true, 00:05:29.458 "seek_hole": false, 00:05:29.458 "seek_data": false, 00:05:29.458 "copy": true, 00:05:29.458 "nvme_iov_md": false 00:05:29.458 }, 00:05:29.458 "memory_domains": [ 00:05:29.458 { 00:05:29.458 "dma_device_id": "system", 00:05:29.458 "dma_device_type": 1 00:05:29.458 }, 00:05:29.458 { 00:05:29.458 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.458 "dma_device_type": 2 00:05:29.458 } 00:05:29.458 ], 00:05:29.458 "driver_specific": {} 00:05:29.458 } 00:05:29.458 ]' 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:29.458 23:31:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:29.458 00:05:29.458 real 0m0.134s 00:05:29.458 user 0m0.083s 00:05:29.458 sys 0m0.017s 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:29.458 23:31:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:29.458 ************************************ 00:05:29.458 END TEST rpc_plugins 00:05:29.458 ************************************ 00:05:29.458 23:31:18 rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:29.459 23:31:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:29.459 23:31:18 rpc -- 
common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:29.459 23:31:18 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:29.459 23:31:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.716 ************************************ 00:05:29.716 START TEST rpc_trace_cmd_test 00:05:29.716 ************************************ 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1117 -- # rpc_trace_cmd_test 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:29.716 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1288963", 00:05:29.716 "tpoint_group_mask": "0x8", 00:05:29.716 "iscsi_conn": { 00:05:29.716 "mask": "0x2", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "scsi": { 00:05:29.716 "mask": "0x4", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "bdev": { 00:05:29.716 "mask": "0x8", 00:05:29.716 "tpoint_mask": "0xffffffffffffffff" 00:05:29.716 }, 00:05:29.716 "nvmf_rdma": { 00:05:29.716 "mask": "0x10", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "nvmf_tcp": { 00:05:29.716 "mask": "0x20", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "ftl": { 00:05:29.716 "mask": "0x40", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "blobfs": { 00:05:29.716 "mask": "0x80", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "dsa": { 00:05:29.716 "mask": "0x200", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "thread": { 00:05:29.716 "mask": "0x400", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "nvme_pcie": { 00:05:29.716 "mask": "0x800", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "iaa": { 00:05:29.716 "mask": "0x1000", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "nvme_tcp": { 00:05:29.716 "mask": "0x2000", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "bdev_nvme": { 00:05:29.716 "mask": "0x4000", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 }, 00:05:29.716 "sock": { 00:05:29.716 "mask": "0x8000", 00:05:29.716 "tpoint_mask": "0x0" 00:05:29.716 } 00:05:29.716 }' 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:29.716 00:05:29.716 real 0m0.207s 00:05:29.716 user 0m0.171s 00:05:29.716 
sys 0m0.028s 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:29.716 23:31:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:29.716 ************************************ 00:05:29.716 END TEST rpc_trace_cmd_test 00:05:29.716 ************************************ 00:05:29.716 23:31:18 rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:29.716 23:31:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:29.716 23:31:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:29.716 23:31:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:29.716 23:31:18 rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:29.716 23:31:18 rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:29.716 23:31:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.974 ************************************ 00:05:29.974 START TEST rpc_daemon_integrity 00:05:29.974 ************************************ 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1117 -- # rpc_integrity 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:29.974 { 00:05:29.974 "name": "Malloc2", 00:05:29.974 "aliases": [ 00:05:29.974 "95dc4d1e-6ddb-4d92-96ff-0c1597544cbf" 00:05:29.974 ], 00:05:29.974 "product_name": "Malloc disk", 00:05:29.974 "block_size": 512, 00:05:29.974 "num_blocks": 16384, 00:05:29.974 "uuid": "95dc4d1e-6ddb-4d92-96ff-0c1597544cbf", 00:05:29.974 "assigned_rate_limits": { 00:05:29.974 "rw_ios_per_sec": 0, 00:05:29.974 "rw_mbytes_per_sec": 0, 00:05:29.974 "r_mbytes_per_sec": 0, 00:05:29.974 "w_mbytes_per_sec": 0 00:05:29.974 }, 00:05:29.974 "claimed": false, 00:05:29.974 "zoned": false, 00:05:29.974 "supported_io_types": { 00:05:29.974 "read": true, 00:05:29.974 "write": true, 00:05:29.974 "unmap": true, 00:05:29.974 "flush": true, 00:05:29.974 "reset": true, 00:05:29.974 "nvme_admin": false, 00:05:29.974 "nvme_io": false, 00:05:29.974 "nvme_io_md": false, 00:05:29.974 "write_zeroes": true, 
00:05:29.974 "zcopy": true, 00:05:29.974 "get_zone_info": false, 00:05:29.974 "zone_management": false, 00:05:29.974 "zone_append": false, 00:05:29.974 "compare": false, 00:05:29.974 "compare_and_write": false, 00:05:29.974 "abort": true, 00:05:29.974 "seek_hole": false, 00:05:29.974 "seek_data": false, 00:05:29.974 "copy": true, 00:05:29.974 "nvme_iov_md": false 00:05:29.974 }, 00:05:29.974 "memory_domains": [ 00:05:29.974 { 00:05:29.974 "dma_device_id": "system", 00:05:29.974 "dma_device_type": 1 00:05:29.974 }, 00:05:29.974 { 00:05:29.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.974 "dma_device_type": 2 00:05:29.974 } 00:05:29.974 ], 00:05:29.974 "driver_specific": {} 00:05:29.974 } 00:05:29.974 ]' 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.974 [2024-07-15 23:31:18.853193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:29.974 [2024-07-15 23:31:18.853220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:29.974 [2024-07-15 23:31:18.853231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12f26b0 00:05:29.974 [2024-07-15 23:31:18.853237] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:29.974 [2024-07-15 23:31:18.854170] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:29.974 [2024-07-15 23:31:18.854191] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:29.974 Passthru0 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.974 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:29.974 { 00:05:29.974 "name": "Malloc2", 00:05:29.974 "aliases": [ 00:05:29.974 "95dc4d1e-6ddb-4d92-96ff-0c1597544cbf" 00:05:29.974 ], 00:05:29.974 "product_name": "Malloc disk", 00:05:29.974 "block_size": 512, 00:05:29.974 "num_blocks": 16384, 00:05:29.974 "uuid": "95dc4d1e-6ddb-4d92-96ff-0c1597544cbf", 00:05:29.974 "assigned_rate_limits": { 00:05:29.974 "rw_ios_per_sec": 0, 00:05:29.974 "rw_mbytes_per_sec": 0, 00:05:29.974 "r_mbytes_per_sec": 0, 00:05:29.974 "w_mbytes_per_sec": 0 00:05:29.974 }, 00:05:29.974 "claimed": true, 00:05:29.974 "claim_type": "exclusive_write", 00:05:29.974 "zoned": false, 00:05:29.974 "supported_io_types": { 00:05:29.974 "read": true, 00:05:29.974 "write": true, 00:05:29.974 "unmap": true, 00:05:29.974 "flush": true, 00:05:29.974 "reset": true, 00:05:29.975 "nvme_admin": false, 00:05:29.975 "nvme_io": false, 00:05:29.975 "nvme_io_md": false, 00:05:29.975 "write_zeroes": true, 00:05:29.975 "zcopy": true, 00:05:29.975 "get_zone_info": false, 00:05:29.975 "zone_management": false, 00:05:29.975 
"zone_append": false, 00:05:29.975 "compare": false, 00:05:29.975 "compare_and_write": false, 00:05:29.975 "abort": true, 00:05:29.975 "seek_hole": false, 00:05:29.975 "seek_data": false, 00:05:29.975 "copy": true, 00:05:29.975 "nvme_iov_md": false 00:05:29.975 }, 00:05:29.975 "memory_domains": [ 00:05:29.975 { 00:05:29.975 "dma_device_id": "system", 00:05:29.975 "dma_device_type": 1 00:05:29.975 }, 00:05:29.975 { 00:05:29.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.975 "dma_device_type": 2 00:05:29.975 } 00:05:29.975 ], 00:05:29.975 "driver_specific": {} 00:05:29.975 }, 00:05:29.975 { 00:05:29.975 "name": "Passthru0", 00:05:29.975 "aliases": [ 00:05:29.975 "3b199624-401a-51e2-bdd2-72c8449de07f" 00:05:29.975 ], 00:05:29.975 "product_name": "passthru", 00:05:29.975 "block_size": 512, 00:05:29.975 "num_blocks": 16384, 00:05:29.975 "uuid": "3b199624-401a-51e2-bdd2-72c8449de07f", 00:05:29.975 "assigned_rate_limits": { 00:05:29.975 "rw_ios_per_sec": 0, 00:05:29.975 "rw_mbytes_per_sec": 0, 00:05:29.975 "r_mbytes_per_sec": 0, 00:05:29.975 "w_mbytes_per_sec": 0 00:05:29.975 }, 00:05:29.975 "claimed": false, 00:05:29.975 "zoned": false, 00:05:29.975 "supported_io_types": { 00:05:29.975 "read": true, 00:05:29.975 "write": true, 00:05:29.975 "unmap": true, 00:05:29.975 "flush": true, 00:05:29.975 "reset": true, 00:05:29.975 "nvme_admin": false, 00:05:29.975 "nvme_io": false, 00:05:29.975 "nvme_io_md": false, 00:05:29.975 "write_zeroes": true, 00:05:29.975 "zcopy": true, 00:05:29.975 "get_zone_info": false, 00:05:29.975 "zone_management": false, 00:05:29.975 "zone_append": false, 00:05:29.975 "compare": false, 00:05:29.975 "compare_and_write": false, 00:05:29.975 "abort": true, 00:05:29.975 "seek_hole": false, 00:05:29.975 "seek_data": false, 00:05:29.975 "copy": true, 00:05:29.975 "nvme_iov_md": false 00:05:29.975 }, 00:05:29.975 "memory_domains": [ 00:05:29.975 { 00:05:29.975 "dma_device_id": "system", 00:05:29.975 "dma_device_type": 1 00:05:29.975 }, 00:05:29.975 { 00:05:29.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:29.975 "dma_device_type": 2 00:05:29.975 } 00:05:29.975 ], 00:05:29.975 "driver_specific": { 00:05:29.975 "passthru": { 00:05:29.975 "name": "Passthru0", 00:05:29.975 "base_bdev_name": "Malloc2" 00:05:29.975 } 00:05:29.975 } 00:05:29.975 } 00:05:29.975 ]' 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:29.975 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:30.232 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:30.232 23:31:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:30.232 00:05:30.232 real 0m0.275s 00:05:30.232 user 0m0.178s 00:05:30.232 sys 0m0.039s 00:05:30.232 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:30.232 23:31:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:30.232 ************************************ 00:05:30.232 END TEST rpc_daemon_integrity 00:05:30.232 ************************************ 00:05:30.232 23:31:19 rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:30.232 23:31:19 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:30.232 23:31:19 rpc -- rpc/rpc.sh@84 -- # killprocess 1288963 00:05:30.232 23:31:19 rpc -- common/autotest_common.sh@942 -- # '[' -z 1288963 ']' 00:05:30.232 23:31:19 rpc -- common/autotest_common.sh@946 -- # kill -0 1288963 00:05:30.232 23:31:19 rpc -- common/autotest_common.sh@947 -- # uname 00:05:30.233 23:31:19 rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:30.233 23:31:19 rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1288963 00:05:30.233 23:31:19 rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:30.233 23:31:19 rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:30.233 23:31:19 rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1288963' 00:05:30.233 killing process with pid 1288963 00:05:30.233 23:31:19 rpc -- common/autotest_common.sh@961 -- # kill 1288963 00:05:30.233 23:31:19 rpc -- common/autotest_common.sh@966 -- # wait 1288963 00:05:30.490 00:05:30.490 real 0m2.400s 00:05:30.490 user 0m3.093s 00:05:30.490 sys 0m0.642s 00:05:30.490 23:31:19 rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:30.490 23:31:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.490 ************************************ 00:05:30.490 END TEST rpc 00:05:30.490 ************************************ 00:05:30.490 23:31:19 -- common/autotest_common.sh@1136 -- # return 0 00:05:30.490 23:31:19 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:30.490 23:31:19 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:30.490 23:31:19 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:30.490 23:31:19 -- common/autotest_common.sh@10 -- # set +x 00:05:30.490 ************************************ 00:05:30.490 START TEST skip_rpc 00:05:30.490 ************************************ 00:05:30.490 23:31:19 skip_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:30.748 * Looking for test storage... 
00:05:30.748 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:30.748 23:31:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:30.748 23:31:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:30.748 23:31:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:30.748 23:31:19 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:30.748 23:31:19 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:30.748 23:31:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.748 ************************************ 00:05:30.748 START TEST skip_rpc 00:05:30.748 ************************************ 00:05:30.748 23:31:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1117 -- # test_skip_rpc 00:05:30.748 23:31:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1289592 00:05:30.748 23:31:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.748 23:31:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:30.748 23:31:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:30.748 [2024-07-15 23:31:19.599571] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:30.748 [2024-07-15 23:31:19.599610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1289592 ] 00:05:30.748 [2024-07-15 23:31:19.653264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.748 [2024-07-15 23:31:19.726164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # local es=0 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # rpc_cmd spdk_get_version 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@645 -- # es=1 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:36.000 23:31:24 
skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1289592 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@942 -- # '[' -z 1289592 ']' 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # kill -0 1289592 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # uname 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1289592 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1289592' 00:05:36.000 killing process with pid 1289592 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@961 -- # kill 1289592 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # wait 1289592 00:05:36.000 00:05:36.000 real 0m5.365s 00:05:36.000 user 0m5.141s 00:05:36.000 sys 0m0.255s 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:36.000 23:31:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.000 ************************************ 00:05:36.000 END TEST skip_rpc 00:05:36.000 ************************************ 00:05:36.000 23:31:24 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:36.000 23:31:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:36.000 23:31:24 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:36.000 23:31:24 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:36.000 23:31:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.258 ************************************ 00:05:36.258 START TEST skip_rpc_with_json 00:05:36.258 ************************************ 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_json 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1290533 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1290533 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@823 -- # '[' -z 1290533 ']' 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
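What the skip_rpc case above verifies is simply that a target launched with --no-rpc-server never brings up its RPC socket, so any RPC attempt has to fail. A rough stand-alone equivalent, assuming the same SPDK build tree layout (spdk_tgt under build/bin, rpc.py under scripts/):

    # start the target with the RPC server disabled and remember its pid
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    # spdk_get_version has to fail, since no RPC listen socket was ever created
    if ./scripts/rpc.py spdk_get_version; then
        echo "unexpected: RPC succeeded"
    else
        echo "RPC refused, as expected"
    fi
    kill "$tgt_pid"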
00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:36.258 23:31:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:36.258 [2024-07-15 23:31:25.035215] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:36.258 [2024-07-15 23:31:25.035257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290533 ] 00:05:36.258 [2024-07-15 23:31:25.089363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.258 [2024-07-15 23:31:25.160771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # return 0 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.228 [2024-07-15 23:31:25.837534] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:37.228 request: 00:05:37.228 { 00:05:37.228 "trtype": "tcp", 00:05:37.228 "method": "nvmf_get_transports", 00:05:37.228 "req_id": 1 00:05:37.228 } 00:05:37.228 Got JSON-RPC error response 00:05:37.228 response: 00:05:37.228 { 00:05:37.228 "code": -19, 00:05:37.228 "message": "No such device" 00:05:37.228 } 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.228 [2024-07-15 23:31:25.849641] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@553 -- # xtrace_disable 00:05:37.228 23:31:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:37.228 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:05:37.228 23:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:37.228 { 00:05:37.228 "subsystems": [ 00:05:37.228 { 00:05:37.228 "subsystem": "keyring", 00:05:37.228 "config": [] 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "subsystem": "iobuf", 00:05:37.228 "config": [ 00:05:37.228 { 00:05:37.228 "method": "iobuf_set_options", 00:05:37.228 "params": { 00:05:37.228 "small_pool_count": 8192, 00:05:37.228 "large_pool_count": 1024, 00:05:37.228 "small_bufsize": 8192, 00:05:37.228 "large_bufsize": 135168 00:05:37.228 } 00:05:37.228 } 00:05:37.228 ] 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "subsystem": "sock", 00:05:37.228 "config": [ 00:05:37.228 { 00:05:37.228 
"method": "sock_set_default_impl", 00:05:37.228 "params": { 00:05:37.228 "impl_name": "posix" 00:05:37.228 } 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "method": "sock_impl_set_options", 00:05:37.228 "params": { 00:05:37.228 "impl_name": "ssl", 00:05:37.228 "recv_buf_size": 4096, 00:05:37.228 "send_buf_size": 4096, 00:05:37.228 "enable_recv_pipe": true, 00:05:37.228 "enable_quickack": false, 00:05:37.228 "enable_placement_id": 0, 00:05:37.228 "enable_zerocopy_send_server": true, 00:05:37.228 "enable_zerocopy_send_client": false, 00:05:37.228 "zerocopy_threshold": 0, 00:05:37.228 "tls_version": 0, 00:05:37.228 "enable_ktls": false 00:05:37.228 } 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "method": "sock_impl_set_options", 00:05:37.228 "params": { 00:05:37.228 "impl_name": "posix", 00:05:37.228 "recv_buf_size": 2097152, 00:05:37.228 "send_buf_size": 2097152, 00:05:37.228 "enable_recv_pipe": true, 00:05:37.228 "enable_quickack": false, 00:05:37.228 "enable_placement_id": 0, 00:05:37.228 "enable_zerocopy_send_server": true, 00:05:37.228 "enable_zerocopy_send_client": false, 00:05:37.228 "zerocopy_threshold": 0, 00:05:37.228 "tls_version": 0, 00:05:37.228 "enable_ktls": false 00:05:37.228 } 00:05:37.228 } 00:05:37.228 ] 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "subsystem": "vmd", 00:05:37.228 "config": [] 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "subsystem": "accel", 00:05:37.228 "config": [ 00:05:37.228 { 00:05:37.228 "method": "accel_set_options", 00:05:37.228 "params": { 00:05:37.228 "small_cache_size": 128, 00:05:37.228 "large_cache_size": 16, 00:05:37.228 "task_count": 2048, 00:05:37.228 "sequence_count": 2048, 00:05:37.228 "buf_count": 2048 00:05:37.228 } 00:05:37.228 } 00:05:37.228 ] 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "subsystem": "bdev", 00:05:37.228 "config": [ 00:05:37.228 { 00:05:37.228 "method": "bdev_set_options", 00:05:37.228 "params": { 00:05:37.228 "bdev_io_pool_size": 65535, 00:05:37.228 "bdev_io_cache_size": 256, 00:05:37.228 "bdev_auto_examine": true, 00:05:37.228 "iobuf_small_cache_size": 128, 00:05:37.228 "iobuf_large_cache_size": 16 00:05:37.228 } 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "method": "bdev_raid_set_options", 00:05:37.228 "params": { 00:05:37.228 "process_window_size_kb": 1024 00:05:37.228 } 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "method": "bdev_iscsi_set_options", 00:05:37.228 "params": { 00:05:37.228 "timeout_sec": 30 00:05:37.228 } 00:05:37.228 }, 00:05:37.228 { 00:05:37.228 "method": "bdev_nvme_set_options", 00:05:37.228 "params": { 00:05:37.228 "action_on_timeout": "none", 00:05:37.228 "timeout_us": 0, 00:05:37.228 "timeout_admin_us": 0, 00:05:37.228 "keep_alive_timeout_ms": 10000, 00:05:37.228 "arbitration_burst": 0, 00:05:37.228 "low_priority_weight": 0, 00:05:37.228 "medium_priority_weight": 0, 00:05:37.228 "high_priority_weight": 0, 00:05:37.228 "nvme_adminq_poll_period_us": 10000, 00:05:37.228 "nvme_ioq_poll_period_us": 0, 00:05:37.228 "io_queue_requests": 0, 00:05:37.228 "delay_cmd_submit": true, 00:05:37.228 "transport_retry_count": 4, 00:05:37.228 "bdev_retry_count": 3, 00:05:37.228 "transport_ack_timeout": 0, 00:05:37.228 "ctrlr_loss_timeout_sec": 0, 00:05:37.228 "reconnect_delay_sec": 0, 00:05:37.228 "fast_io_fail_timeout_sec": 0, 00:05:37.228 "disable_auto_failback": false, 00:05:37.228 "generate_uuids": false, 00:05:37.228 "transport_tos": 0, 00:05:37.228 "nvme_error_stat": false, 00:05:37.228 "rdma_srq_size": 0, 00:05:37.228 "io_path_stat": false, 00:05:37.228 "allow_accel_sequence": false, 00:05:37.228 
"rdma_max_cq_size": 0, 00:05:37.228 "rdma_cm_event_timeout_ms": 0, 00:05:37.228 "dhchap_digests": [ 00:05:37.229 "sha256", 00:05:37.229 "sha384", 00:05:37.229 "sha512" 00:05:37.229 ], 00:05:37.229 "dhchap_dhgroups": [ 00:05:37.229 "null", 00:05:37.229 "ffdhe2048", 00:05:37.229 "ffdhe3072", 00:05:37.229 "ffdhe4096", 00:05:37.229 "ffdhe6144", 00:05:37.229 "ffdhe8192" 00:05:37.229 ] 00:05:37.229 } 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "method": "bdev_nvme_set_hotplug", 00:05:37.229 "params": { 00:05:37.229 "period_us": 100000, 00:05:37.229 "enable": false 00:05:37.229 } 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "method": "bdev_wait_for_examine" 00:05:37.229 } 00:05:37.229 ] 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "subsystem": "scsi", 00:05:37.229 "config": null 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "subsystem": "scheduler", 00:05:37.229 "config": [ 00:05:37.229 { 00:05:37.229 "method": "framework_set_scheduler", 00:05:37.229 "params": { 00:05:37.229 "name": "static" 00:05:37.229 } 00:05:37.229 } 00:05:37.229 ] 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "subsystem": "vhost_scsi", 00:05:37.229 "config": [] 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "subsystem": "vhost_blk", 00:05:37.229 "config": [] 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "subsystem": "ublk", 00:05:37.229 "config": [] 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "subsystem": "nbd", 00:05:37.229 "config": [] 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "subsystem": "nvmf", 00:05:37.229 "config": [ 00:05:37.229 { 00:05:37.229 "method": "nvmf_set_config", 00:05:37.229 "params": { 00:05:37.229 "discovery_filter": "match_any", 00:05:37.229 "admin_cmd_passthru": { 00:05:37.229 "identify_ctrlr": false 00:05:37.229 } 00:05:37.229 } 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "method": "nvmf_set_max_subsystems", 00:05:37.229 "params": { 00:05:37.229 "max_subsystems": 1024 00:05:37.229 } 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "method": "nvmf_set_crdt", 00:05:37.229 "params": { 00:05:37.229 "crdt1": 0, 00:05:37.229 "crdt2": 0, 00:05:37.229 "crdt3": 0 00:05:37.229 } 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "method": "nvmf_create_transport", 00:05:37.229 "params": { 00:05:37.229 "trtype": "TCP", 00:05:37.229 "max_queue_depth": 128, 00:05:37.229 "max_io_qpairs_per_ctrlr": 127, 00:05:37.229 "in_capsule_data_size": 4096, 00:05:37.229 "max_io_size": 131072, 00:05:37.229 "io_unit_size": 131072, 00:05:37.229 "max_aq_depth": 128, 00:05:37.229 "num_shared_buffers": 511, 00:05:37.229 "buf_cache_size": 4294967295, 00:05:37.229 "dif_insert_or_strip": false, 00:05:37.229 "zcopy": false, 00:05:37.229 "c2h_success": true, 00:05:37.229 "sock_priority": 0, 00:05:37.229 "abort_timeout_sec": 1, 00:05:37.229 "ack_timeout": 0, 00:05:37.229 "data_wr_pool_size": 0 00:05:37.229 } 00:05:37.229 } 00:05:37.229 ] 00:05:37.229 }, 00:05:37.229 { 00:05:37.229 "subsystem": "iscsi", 00:05:37.229 "config": [ 00:05:37.229 { 00:05:37.229 "method": "iscsi_set_options", 00:05:37.229 "params": { 00:05:37.229 "node_base": "iqn.2016-06.io.spdk", 00:05:37.229 "max_sessions": 128, 00:05:37.229 "max_connections_per_session": 2, 00:05:37.229 "max_queue_depth": 64, 00:05:37.229 "default_time2wait": 2, 00:05:37.229 "default_time2retain": 20, 00:05:37.229 "first_burst_length": 8192, 00:05:37.229 "immediate_data": true, 00:05:37.229 "allow_duplicated_isid": false, 00:05:37.229 "error_recovery_level": 0, 00:05:37.229 "nop_timeout": 60, 00:05:37.229 "nop_in_interval": 30, 00:05:37.229 "disable_chap": false, 00:05:37.229 "require_chap": false, 00:05:37.229 
"mutual_chap": false, 00:05:37.229 "chap_group": 0, 00:05:37.229 "max_large_datain_per_connection": 64, 00:05:37.229 "max_r2t_per_connection": 4, 00:05:37.229 "pdu_pool_size": 36864, 00:05:37.229 "immediate_data_pool_size": 16384, 00:05:37.229 "data_out_pool_size": 2048 00:05:37.229 } 00:05:37.229 } 00:05:37.229 ] 00:05:37.229 } 00:05:37.229 ] 00:05:37.229 } 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1290533 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 1290533 ']' 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 1290533 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1290533 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1290533' 00:05:37.229 killing process with pid 1290533 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 1290533 00:05:37.229 23:31:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 1290533 00:05:37.532 23:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1290782 00:05:37.532 23:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:37.532 23:31:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1290782 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@942 -- # '[' -z 1290782 ']' 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # kill -0 1290782 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # uname 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1290782 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1290782' 00:05:42.828 killing process with pid 1290782 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@961 -- # kill 1290782 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # wait 1290782 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:42.828 23:31:31 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:42.828 00:05:42.828 real 0m6.734s 00:05:42.828 user 0m6.570s 00:05:42.828 sys 0m0.588s 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.828 ************************************ 00:05:42.828 END TEST skip_rpc_with_json 00:05:42.828 ************************************ 00:05:42.828 23:31:31 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:42.828 23:31:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:42.828 23:31:31 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:42.828 23:31:31 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:42.828 23:31:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.828 ************************************ 00:05:42.828 START TEST skip_rpc_with_delay 00:05:42.828 ************************************ 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1117 -- # test_skip_rpc_with_delay 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # local es=0 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:42.828 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:43.086 [2024-07-15 23:31:31.830228] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:43.086 [2024-07-15 23:31:31.830283] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:43.086 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@645 -- # es=1 00:05:43.086 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:43.086 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:05:43.086 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:43.086 00:05:43.086 real 0m0.061s 00:05:43.086 user 0m0.039s 00:05:43.086 sys 0m0.022s 00:05:43.086 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:43.086 23:31:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:43.086 ************************************ 00:05:43.086 END TEST skip_rpc_with_delay 00:05:43.086 ************************************ 00:05:43.086 23:31:31 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:43.086 23:31:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:43.086 23:31:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:43.086 23:31:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:43.086 23:31:31 skip_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:43.086 23:31:31 skip_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:43.086 23:31:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.086 ************************************ 00:05:43.086 START TEST exit_on_failed_rpc_init 00:05:43.086 ************************************ 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1117 -- # test_exit_on_failed_rpc_init 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1291750 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1291750 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@823 -- # '[' -z 1291750 ']' 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:43.086 23:31:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:43.086 [2024-07-15 23:31:31.952963] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:05:43.086 [2024-07-15 23:31:31.953002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291750 ] 00:05:43.086 [2024-07-15 23:31:32.006457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.344 [2024-07-15 23:31:32.087022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # return 0 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # local es=0 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:43.908 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:43.908 [2024-07-15 23:31:32.798757] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:43.908 [2024-07-15 23:31:32.798801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1291859 ] 00:05:43.908 [2024-07-15 23:31:32.852112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.166 [2024-07-15 23:31:32.924839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.166 [2024-07-15 23:31:32.924919] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:44.166 [2024-07-15 23:31:32.924928] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:44.166 [2024-07-15 23:31:32.924935] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@645 -- # es=234 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # es=106 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # case "$es" in 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=1 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1291750 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@942 -- # '[' -z 1291750 ']' 00:05:44.166 23:31:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # kill -0 1291750 00:05:44.166 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # uname 00:05:44.166 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:05:44.166 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1291750 00:05:44.166 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:05:44.166 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:05:44.166 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1291750' 00:05:44.166 killing process with pid 1291750 00:05:44.166 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@961 -- # kill 1291750 00:05:44.166 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # wait 1291750 00:05:44.423 00:05:44.423 real 0m1.441s 00:05:44.423 user 0m1.660s 00:05:44.423 sys 0m0.387s 00:05:44.423 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:44.423 23:31:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 ************************************ 00:05:44.423 END TEST exit_on_failed_rpc_init 00:05:44.423 ************************************ 00:05:44.423 23:31:33 skip_rpc -- common/autotest_common.sh@1136 -- # return 0 00:05:44.423 23:31:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:44.423 00:05:44.423 real 0m13.943s 00:05:44.423 user 0m13.549s 00:05:44.423 sys 0m1.480s 00:05:44.423 23:31:33 skip_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:44.423 23:31:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.423 ************************************ 00:05:44.423 END TEST skip_rpc 00:05:44.423 ************************************ 00:05:44.679 23:31:33 -- common/autotest_common.sh@1136 -- # return 0 00:05:44.679 23:31:33 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.679 23:31:33 -- 
common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:44.679 23:31:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:44.679 23:31:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.679 ************************************ 00:05:44.679 START TEST rpc_client 00:05:44.679 ************************************ 00:05:44.679 23:31:33 rpc_client -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:44.679 * Looking for test storage... 00:05:44.679 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:44.679 23:31:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:44.679 OK 00:05:44.679 23:31:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:44.679 00:05:44.679 real 0m0.110s 00:05:44.679 user 0m0.057s 00:05:44.679 sys 0m0.060s 00:05:44.679 23:31:33 rpc_client -- common/autotest_common.sh@1118 -- # xtrace_disable 00:05:44.679 23:31:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:44.679 ************************************ 00:05:44.679 END TEST rpc_client 00:05:44.679 ************************************ 00:05:44.679 23:31:33 -- common/autotest_common.sh@1136 -- # return 0 00:05:44.680 23:31:33 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.680 23:31:33 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:05:44.680 23:31:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:05:44.680 23:31:33 -- common/autotest_common.sh@10 -- # set +x 00:05:44.680 ************************************ 00:05:44.680 START TEST json_config 00:05:44.680 ************************************ 00:05:44.680 23:31:33 json_config -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@21 -- # 
NET_TYPE=phy-fallback 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:44.938 23:31:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:44.938 23:31:33 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:44.938 23:31:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:44.938 23:31:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.938 23:31:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.938 23:31:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.938 23:31:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:44.938 23:31:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@47 -- # : 0 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:44.938 23:31:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:44.938 23:31:33 json_config -- 
json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:44.938 INFO: JSON configuration test init 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.938 23:31:33 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:44.938 23:31:33 json_config -- json_config/common.sh@9 -- # local app=target 00:05:44.938 23:31:33 json_config -- json_config/common.sh@10 -- # shift 00:05:44.938 23:31:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:44.938 23:31:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:44.938 23:31:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:44.938 23:31:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.938 23:31:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:44.938 23:31:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1292110 00:05:44.938 23:31:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:44.938 Waiting for target to run... 
00:05:44.938 23:31:33 json_config -- json_config/common.sh@25 -- # waitforlisten 1292110 /var/tmp/spdk_tgt.sock 00:05:44.938 23:31:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@823 -- # '[' -z 1292110 ']' 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:44.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:44.938 23:31:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.938 [2024-07-15 23:31:33.773443] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:44.939 [2024-07-15 23:31:33.773490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1292110 ] 00:05:45.504 [2024-07-15 23:31:34.205136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.504 [2024-07-15 23:31:34.292110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.761 23:31:34 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:05:45.761 23:31:34 json_config -- common/autotest_common.sh@856 -- # return 0 00:05:45.761 23:31:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:45.761 00:05:45.761 23:31:34 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:45.761 23:31:34 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:45.761 23:31:34 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:45.761 23:31:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.761 23:31:34 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:45.761 23:31:34 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:45.761 23:31:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:45.761 23:31:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.761 23:31:34 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:45.761 23:31:34 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:45.761 23:31:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:49.044 23:31:37 
json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:49.044 23:31:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@233 -- # [[ rdma == \r\d\m\a ]] 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@234 -- # TEST_TRANSPORT=rdma 00:05:49.044 23:31:37 json_config -- json_config/json_config.sh@234 -- # nvmftestinit 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:49.044 23:31:37 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:49.044 23:31:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:54.297 
23:31:43 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:05:54.297 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:54.297 
23:31:43 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:05:54.297 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:05:54.297 Found net devices under 0000:da:00.0: mlx_0_0 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:05:54.297 Found net devices under 0000:da:00.1: mlx_0_1 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@58 -- # uname 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:54.297 23:31:43 json_config -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:54.297 23:31:43 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:54.297 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:54.297 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:05:54.297 altname enp218s0f0np0 00:05:54.297 altname ens818f0np0 00:05:54.297 inet 192.168.100.8/24 scope global mlx_0_0 00:05:54.297 valid_lft forever preferred_lft forever 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:54.298 
23:31:43 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:54.298 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:54.298 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:05:54.298 altname enp218s0f1np1 00:05:54.298 altname ens818f1np1 00:05:54.298 inet 192.168.100.9/24 scope global mlx_0_1 00:05:54.298 valid_lft forever preferred_lft forever 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@422 -- # return 0 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:54.298 192.168.100.9' 00:05:54.298 23:31:43 json_config -- 
nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:54.298 192.168.100.9' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:54.298 192.168.100.9' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:54.298 23:31:43 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:54.298 23:31:43 json_config -- json_config/json_config.sh@237 -- # [[ -z 192.168.100.8 ]] 00:05:54.298 23:31:43 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:54.298 23:31:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:54.554 MallocForNvmf0 00:05:54.555 23:31:43 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:54.555 23:31:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:54.811 MallocForNvmf1 00:05:54.811 23:31:43 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:54.811 23:31:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:54.811 [2024-07-15 23:31:43.749983] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:54.811 [2024-07-15 23:31:43.780761] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x148cb90/0x15b9d00) succeed. 00:05:54.811 [2024-07-15 23:31:43.793227] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x148ed80/0x1499bc0) succeed. 
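The address discovery traced above (get_rdma_if_list / get_ip_address) reduces to the ip/awk/cut pipeline visible in the trace; a standalone restatement, with the interface name as the only parameter:

# Print the IPv4 address of an RDMA netdev, e.g. mlx_0_0 -> 192.168.100.8 in this run.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 (NVMF_FIRST_TARGET_IP)
get_ip_address mlx_0_1   # 192.168.100.9 (NVMF_SECOND_TARGET_IP)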
00:05:55.083 23:31:43 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.083 23:31:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.083 23:31:44 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.083 23:31:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.341 23:31:44 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.341 23:31:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:55.598 23:31:44 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:55.598 23:31:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:55.598 [2024-07-15 23:31:44.507266] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:55.598 23:31:44 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:55.598 23:31:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.598 23:31:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.598 23:31:44 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:55.598 23:31:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.598 23:31:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.856 23:31:44 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:55.856 23:31:44 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.856 23:31:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:55.856 MallocBdevForConfigChangeCheck 00:05:55.856 23:31:44 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:55.856 23:31:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.856 23:31:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.856 23:31:44 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:55.856 23:31:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.114 23:31:45 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:56.114 INFO: shutting down applications... 
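Condensed from the trace above, the NVMe-oF/RDMA target configuration is built with the following rpc.py sequence; socket path, NQN, addresses and sizes are taken verbatim from the log, and the sketch is not the harness source itself:

RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0               # first backing bdev
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1              # second backing bdev
$RPC nvmf_create_transport -t rdma -u 8192 -c 0                   # RDMA transport with the options captured above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420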
00:05:56.114 23:31:45 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:56.114 23:31:45 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:56.114 23:31:45 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:56.114 23:31:45 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:58.642 Calling clear_iscsi_subsystem 00:05:58.643 Calling clear_nvmf_subsystem 00:05:58.643 Calling clear_nbd_subsystem 00:05:58.643 Calling clear_ublk_subsystem 00:05:58.643 Calling clear_vhost_blk_subsystem 00:05:58.643 Calling clear_vhost_scsi_subsystem 00:05:58.643 Calling clear_bdev_subsystem 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@345 -- # break 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:58.643 23:31:47 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:58.643 23:31:47 json_config -- json_config/common.sh@31 -- # local app=target 00:05:58.643 23:31:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:58.643 23:31:47 json_config -- json_config/common.sh@35 -- # [[ -n 1292110 ]] 00:05:58.643 23:31:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1292110 00:05:58.643 23:31:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:58.643 23:31:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:58.643 23:31:47 json_config -- json_config/common.sh@41 -- # kill -0 1292110 00:05:58.643 23:31:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.210 23:31:48 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.210 23:31:48 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.210 23:31:48 json_config -- json_config/common.sh@41 -- # kill -0 1292110 00:05:59.210 23:31:48 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.210 23:31:48 json_config -- json_config/common.sh@43 -- # break 00:05:59.210 23:31:48 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.210 23:31:48 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.210 SPDK target shutdown done 00:05:59.210 23:31:48 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:59.210 INFO: relaunching applications... 
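The shutdown path traced above (json_config_test_shutdown_app) is a SIGINT followed by a bounded liveness poll; a simplified sketch using the PID from this run, with the harness error handling reduced to a break:

kill -SIGINT 1292110
for (( i = 0; i < 30; i++ )); do
    kill -0 1292110 2>/dev/null || break   # kill -0 only checks that the process still exists
    sleep 0.5
done
echo 'SPDK target shutdown done'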
00:05:59.210 23:31:48 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.210 23:31:48 json_config -- json_config/common.sh@9 -- # local app=target 00:05:59.210 23:31:48 json_config -- json_config/common.sh@10 -- # shift 00:05:59.210 23:31:48 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.210 23:31:48 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.210 23:31:48 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.210 23:31:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.210 23:31:48 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.210 23:31:48 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1296800 00:05:59.210 23:31:48 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.210 Waiting for target to run... 00:05:59.210 23:31:48 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.210 23:31:48 json_config -- json_config/common.sh@25 -- # waitforlisten 1296800 /var/tmp/spdk_tgt.sock 00:05:59.210 23:31:48 json_config -- common/autotest_common.sh@823 -- # '[' -z 1296800 ']' 00:05:59.210 23:31:48 json_config -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.210 23:31:48 json_config -- common/autotest_common.sh@828 -- # local max_retries=100 00:05:59.210 23:31:48 json_config -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.210 23:31:48 json_config -- common/autotest_common.sh@832 -- # xtrace_disable 00:05:59.210 23:31:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.210 [2024-07-15 23:31:48.137955] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:05:59.210 [2024-07-15 23:31:48.138009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1296800 ] 00:05:59.777 [2024-07-15 23:31:48.568768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.777 [2024-07-15 23:31:48.659250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.059 [2024-07-15 23:31:51.698659] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x11109a0/0x113d280) succeed. 00:06:03.059 [2024-07-15 23:31:51.709452] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1112b90/0x119d260) succeed. 
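Relaunching from the saved configuration, as done above, amounts to persisting the live state with save_config and handing the file back via --json; the redirection into spdk_tgt_config.json is not visible in this trace, so that step is an assumption in the sketch below:

RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
CFG=/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json
$RPC save_config > "$CFG"                                          # snapshot the running target (assumed redirection)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
    -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json "$CFG"         # restart and replay the saved config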
00:06:03.059 [2024-07-15 23:31:51.758595] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:03.317 23:31:52 json_config -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:03.317 23:31:52 json_config -- common/autotest_common.sh@856 -- # return 0 00:06:03.317 23:31:52 json_config -- json_config/common.sh@26 -- # echo '' 00:06:03.317 00:06:03.317 23:31:52 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:03.317 23:31:52 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:03.317 INFO: Checking if target configuration is the same... 00:06:03.574 23:31:52 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.574 23:31:52 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:03.574 23:31:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:03.574 + '[' 2 -ne 2 ']' 00:06:03.574 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:03.574 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:03.574 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:03.574 +++ basename /dev/fd/62 00:06:03.574 ++ mktemp /tmp/62.XXX 00:06:03.574 + tmp_file_1=/tmp/62.3fr 00:06:03.574 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.574 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:03.574 + tmp_file_2=/tmp/spdk_tgt_config.json.qTJ 00:06:03.574 + ret=0 00:06:03.574 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.833 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:03.833 + diff -u /tmp/62.3fr /tmp/spdk_tgt_config.json.qTJ 00:06:03.833 + echo 'INFO: JSON config files are the same' 00:06:03.833 INFO: JSON config files are the same 00:06:03.833 + rm /tmp/62.3fr /tmp/spdk_tgt_config.json.qTJ 00:06:03.833 + exit 0 00:06:03.833 23:31:52 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:03.833 23:31:52 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:03.833 INFO: changing configuration and checking if this can be detected... 
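The "same configuration" check above (json_diff.sh) normalizes both JSON documents with config_filter.py -method sort and compares them textually; the .sorted intermediates below are illustrative names, since the actual redirection is not captured in the trace:

FILTER=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py
# /tmp/62.3fr holds the live config read from /dev/fd/62; /tmp/spdk_tgt_config.json.qTJ holds the saved file.
$FILTER -method sort < /tmp/62.3fr                   > /tmp/62.3fr.sorted
$FILTER -method sort < /tmp/spdk_tgt_config.json.qTJ > /tmp/spdk_tgt_config.json.qTJ.sorted
diff -u /tmp/62.3fr.sorted /tmp/spdk_tgt_config.json.qTJ.sorted \
    && echo 'INFO: JSON config files are the same'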
00:06:03.833 23:31:52 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.833 23:31:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:03.833 23:31:52 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.833 23:31:52 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:03.833 23:31:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.091 + '[' 2 -ne 2 ']' 00:06:04.091 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:04.091 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:04.091 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:04.091 +++ basename /dev/fd/62 00:06:04.091 ++ mktemp /tmp/62.XXX 00:06:04.091 + tmp_file_1=/tmp/62.Jny 00:06:04.091 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.091 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:04.091 + tmp_file_2=/tmp/spdk_tgt_config.json.kvU 00:06:04.091 + ret=0 00:06:04.091 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.349 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.349 + diff -u /tmp/62.Jny /tmp/spdk_tgt_config.json.kvU 00:06:04.349 + ret=1 00:06:04.349 + echo '=== Start of file: /tmp/62.Jny ===' 00:06:04.349 + cat /tmp/62.Jny 00:06:04.349 + echo '=== End of file: /tmp/62.Jny ===' 00:06:04.349 + echo '' 00:06:04.349 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kvU ===' 00:06:04.349 + cat /tmp/spdk_tgt_config.json.kvU 00:06:04.349 + echo '=== End of file: /tmp/spdk_tgt_config.json.kvU ===' 00:06:04.349 + echo '' 00:06:04.349 + rm /tmp/62.Jny /tmp/spdk_tgt_config.json.kvU 00:06:04.349 + exit 1 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:04.349 INFO: configuration change detected. 
00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@317 -- # [[ -n 1296800 ]] 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:04.349 23:31:53 json_config -- json_config/json_config.sh@323 -- # killprocess 1296800 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@942 -- # '[' -z 1296800 ']' 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@946 -- # kill -0 1296800 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@947 -- # uname 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1296800 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1296800' 00:06:04.349 killing process with pid 1296800 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@961 -- # kill 1296800 00:06:04.349 23:31:53 json_config -- common/autotest_common.sh@966 -- # wait 1296800 00:06:06.879 23:31:55 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:06.879 23:31:55 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:06.879 23:31:55 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:06.879 23:31:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.879 23:31:55 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:06.879 23:31:55 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:06.879 INFO: Success 00:06:06.879 23:31:55 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:06.879 23:31:55 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:06.879 23:31:55 json_config -- nvmf/common.sh@117 -- # sync 00:06:06.879 23:31:55 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:06.879 23:31:55 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:06.879 23:31:55 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:06.879 23:31:55 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:06.879 23:31:55 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:06.879 00:06:06.879 real 0m21.725s 00:06:06.879 user 0m23.805s 00:06:06.879 sys 0m6.027s 00:06:06.879 23:31:55 json_config -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:06.879 23:31:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.879 ************************************ 00:06:06.879 END TEST json_config 00:06:06.879 ************************************ 00:06:06.879 23:31:55 -- common/autotest_common.sh@1136 -- # return 0 00:06:06.879 23:31:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:06.879 23:31:55 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:06.879 23:31:55 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:06.879 23:31:55 -- common/autotest_common.sh@10 -- # set +x 00:06:06.879 ************************************ 00:06:06.879 START TEST json_config_extra_key 00:06:06.879 ************************************ 00:06:06.879 23:31:55 json_config_extra_key -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:06.879 23:31:55 
json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:06.879 23:31:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.879 23:31:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.879 23:31:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.879 23:31:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.879 23:31:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.879 23:31:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.879 23:31:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:06.879 23:31:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.879 23:31:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:06.879 23:31:55 json_config_extra_key -- 
json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:06.879 INFO: launching applications... 00:06:06.879 23:31:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1298105 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:06.879 Waiting for target to run... 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1298105 /var/tmp/spdk_tgt.sock 00:06:06.879 23:31:55 json_config_extra_key -- common/autotest_common.sh@823 -- # '[' -z 1298105 ']' 00:06:06.879 23:31:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:06.879 23:31:55 json_config_extra_key -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:06.879 23:31:55 json_config_extra_key -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:06.879 23:31:55 json_config_extra_key -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:06.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
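The extra_key variant drives the same common.sh helpers, but with a single 'target' app and a pre-built JSON config; a restatement of the associative arrays and launch command traced above (sketch only, not the test source):

declare -A app_pid=( ['target']='' )
declare -A app_socket=( ['target']='/var/tmp/spdk_tgt.sock' )
declare -A app_params=( ['target']='-m 0x1 -s 1024' )
declare -A configs_path=( ['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json' )
# Launch the target directly from the static JSON config (no --wait-for-rpc in this test).
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
    ${app_params['target']} -r "${app_socket['target']}" --json "${configs_path['target']}"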
00:06:06.879 23:31:55 json_config_extra_key -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:06.880 23:31:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:06.880 [2024-07-15 23:31:55.547036] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:06.880 [2024-07-15 23:31:55.547085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298105 ] 00:06:06.880 [2024-07-15 23:31:55.812363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.138 [2024-07-15 23:31:55.879662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.398 23:31:56 json_config_extra_key -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:07.398 23:31:56 json_config_extra_key -- common/autotest_common.sh@856 -- # return 0 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:07.398 00:06:07.398 23:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:07.398 INFO: shutting down applications... 00:06:07.398 23:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1298105 ]] 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1298105 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1298105 00:06:07.398 23:31:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:07.966 23:31:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:07.966 23:31:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.966 23:31:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1298105 00:06:07.966 23:31:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:07.966 23:31:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:07.966 23:31:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:07.966 23:31:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:07.966 SPDK target shutdown done 00:06:07.966 23:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:07.966 Success 00:06:07.966 00:06:07.966 real 0m1.428s 00:06:07.966 user 0m1.202s 00:06:07.966 sys 0m0.360s 00:06:07.966 23:31:56 json_config_extra_key -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:07.966 23:31:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.966 ************************************ 00:06:07.966 END TEST json_config_extra_key 00:06:07.966 ************************************ 00:06:07.966 23:31:56 -- common/autotest_common.sh@1136 -- # return 0 00:06:07.966 23:31:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:07.966 23:31:56 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:07.966 23:31:56 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:07.966 23:31:56 -- common/autotest_common.sh@10 -- # set +x 00:06:07.966 ************************************ 00:06:07.966 START TEST alias_rpc 00:06:07.966 ************************************ 00:06:07.966 23:31:56 alias_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:08.225 * Looking for test storage... 00:06:08.225 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:08.225 23:31:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.225 23:31:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1298390 00:06:08.225 23:31:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.225 23:31:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1298390 00:06:08.225 23:31:56 alias_rpc -- common/autotest_common.sh@823 -- # '[' -z 1298390 ']' 00:06:08.225 23:31:56 alias_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.225 23:31:56 alias_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:08.225 23:31:56 alias_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.225 23:31:56 alias_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:08.225 23:31:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.225 [2024-07-15 23:31:57.035869] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:08.225 [2024-07-15 23:31:57.035919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298390 ] 00:06:08.225 [2024-07-15 23:31:57.089927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.225 [2024-07-15 23:31:57.163183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.161 23:31:57 alias_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:09.161 23:31:57 alias_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:09.161 23:31:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:09.161 23:31:58 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1298390 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@942 -- # '[' -z 1298390 ']' 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@946 -- # kill -0 1298390 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@947 -- # uname 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1298390 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1298390' 00:06:09.161 killing process with pid 1298390 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@961 -- # kill 1298390 00:06:09.161 23:31:58 alias_rpc -- common/autotest_common.sh@966 -- # wait 1298390 00:06:09.418 00:06:09.418 real 0m1.452s 00:06:09.418 user 0m1.589s 00:06:09.418 sys 0m0.369s 00:06:09.418 23:31:58 alias_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:09.419 23:31:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.419 ************************************ 00:06:09.419 END TEST alias_rpc 00:06:09.419 ************************************ 00:06:09.419 23:31:58 -- common/autotest_common.sh@1136 -- # return 0 00:06:09.419 23:31:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:09.419 23:31:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:09.419 23:31:58 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:09.419 23:31:58 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:09.419 23:31:58 -- common/autotest_common.sh@10 -- # set +x 00:06:09.676 ************************************ 00:06:09.676 START TEST spdkcli_tcp 00:06:09.676 ************************************ 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:09.676 * Looking for test storage... 
00:06:09.676 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1298677 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1298677 00:06:09.676 23:31:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@823 -- # '[' -z 1298677 ']' 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:09.676 23:31:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:09.676 [2024-07-15 23:31:58.558408] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:09.676 [2024-07-15 23:31:58.558456] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298677 ] 00:06:09.676 [2024-07-15 23:31:58.612024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.934 [2024-07-15 23:31:58.686979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.934 [2024-07-15 23:31:58.686982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.499 23:31:59 spdkcli_tcp -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:10.499 23:31:59 spdkcli_tcp -- common/autotest_common.sh@856 -- # return 0 00:06:10.499 23:31:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1298905 00:06:10.499 23:31:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:10.499 23:31:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:10.757 [ 00:06:10.757 "bdev_malloc_delete", 00:06:10.757 "bdev_malloc_create", 00:06:10.757 "bdev_null_resize", 00:06:10.757 "bdev_null_delete", 00:06:10.757 "bdev_null_create", 00:06:10.757 "bdev_nvme_cuse_unregister", 00:06:10.757 "bdev_nvme_cuse_register", 00:06:10.757 "bdev_opal_new_user", 00:06:10.757 "bdev_opal_set_lock_state", 00:06:10.757 "bdev_opal_delete", 00:06:10.757 "bdev_opal_get_info", 00:06:10.757 "bdev_opal_create", 00:06:10.757 "bdev_nvme_opal_revert", 00:06:10.757 "bdev_nvme_opal_init", 00:06:10.757 "bdev_nvme_send_cmd", 00:06:10.757 "bdev_nvme_get_path_iostat", 00:06:10.757 "bdev_nvme_get_mdns_discovery_info", 00:06:10.757 "bdev_nvme_stop_mdns_discovery", 00:06:10.757 "bdev_nvme_start_mdns_discovery", 00:06:10.757 "bdev_nvme_set_multipath_policy", 00:06:10.757 "bdev_nvme_set_preferred_path", 00:06:10.757 "bdev_nvme_get_io_paths", 00:06:10.757 "bdev_nvme_remove_error_injection", 00:06:10.757 "bdev_nvme_add_error_injection", 00:06:10.757 "bdev_nvme_get_discovery_info", 00:06:10.757 "bdev_nvme_stop_discovery", 00:06:10.757 "bdev_nvme_start_discovery", 00:06:10.757 "bdev_nvme_get_controller_health_info", 00:06:10.757 "bdev_nvme_disable_controller", 00:06:10.757 "bdev_nvme_enable_controller", 00:06:10.757 "bdev_nvme_reset_controller", 00:06:10.757 "bdev_nvme_get_transport_statistics", 00:06:10.757 "bdev_nvme_apply_firmware", 00:06:10.757 "bdev_nvme_detach_controller", 00:06:10.757 "bdev_nvme_get_controllers", 00:06:10.757 "bdev_nvme_attach_controller", 00:06:10.757 "bdev_nvme_set_hotplug", 00:06:10.757 "bdev_nvme_set_options", 00:06:10.757 "bdev_passthru_delete", 00:06:10.757 "bdev_passthru_create", 00:06:10.757 "bdev_lvol_set_parent_bdev", 00:06:10.757 "bdev_lvol_set_parent", 00:06:10.757 "bdev_lvol_check_shallow_copy", 00:06:10.757 "bdev_lvol_start_shallow_copy", 00:06:10.757 "bdev_lvol_grow_lvstore", 00:06:10.757 "bdev_lvol_get_lvols", 00:06:10.757 "bdev_lvol_get_lvstores", 00:06:10.757 "bdev_lvol_delete", 00:06:10.757 "bdev_lvol_set_read_only", 00:06:10.757 "bdev_lvol_resize", 00:06:10.757 "bdev_lvol_decouple_parent", 00:06:10.757 "bdev_lvol_inflate", 00:06:10.757 "bdev_lvol_rename", 00:06:10.757 "bdev_lvol_clone_bdev", 00:06:10.757 "bdev_lvol_clone", 00:06:10.757 "bdev_lvol_snapshot", 00:06:10.757 "bdev_lvol_create", 00:06:10.757 "bdev_lvol_delete_lvstore", 00:06:10.757 "bdev_lvol_rename_lvstore", 00:06:10.757 "bdev_lvol_create_lvstore", 
00:06:10.757 "bdev_raid_set_options", 00:06:10.757 "bdev_raid_remove_base_bdev", 00:06:10.757 "bdev_raid_add_base_bdev", 00:06:10.757 "bdev_raid_delete", 00:06:10.757 "bdev_raid_create", 00:06:10.757 "bdev_raid_get_bdevs", 00:06:10.757 "bdev_error_inject_error", 00:06:10.757 "bdev_error_delete", 00:06:10.757 "bdev_error_create", 00:06:10.757 "bdev_split_delete", 00:06:10.757 "bdev_split_create", 00:06:10.757 "bdev_delay_delete", 00:06:10.757 "bdev_delay_create", 00:06:10.757 "bdev_delay_update_latency", 00:06:10.757 "bdev_zone_block_delete", 00:06:10.757 "bdev_zone_block_create", 00:06:10.757 "blobfs_create", 00:06:10.757 "blobfs_detect", 00:06:10.757 "blobfs_set_cache_size", 00:06:10.757 "bdev_aio_delete", 00:06:10.757 "bdev_aio_rescan", 00:06:10.757 "bdev_aio_create", 00:06:10.757 "bdev_ftl_set_property", 00:06:10.757 "bdev_ftl_get_properties", 00:06:10.757 "bdev_ftl_get_stats", 00:06:10.757 "bdev_ftl_unmap", 00:06:10.757 "bdev_ftl_unload", 00:06:10.757 "bdev_ftl_delete", 00:06:10.757 "bdev_ftl_load", 00:06:10.757 "bdev_ftl_create", 00:06:10.757 "bdev_virtio_attach_controller", 00:06:10.757 "bdev_virtio_scsi_get_devices", 00:06:10.757 "bdev_virtio_detach_controller", 00:06:10.757 "bdev_virtio_blk_set_hotplug", 00:06:10.757 "bdev_iscsi_delete", 00:06:10.757 "bdev_iscsi_create", 00:06:10.757 "bdev_iscsi_set_options", 00:06:10.757 "accel_error_inject_error", 00:06:10.757 "ioat_scan_accel_module", 00:06:10.757 "dsa_scan_accel_module", 00:06:10.757 "iaa_scan_accel_module", 00:06:10.757 "keyring_file_remove_key", 00:06:10.757 "keyring_file_add_key", 00:06:10.757 "keyring_linux_set_options", 00:06:10.757 "iscsi_get_histogram", 00:06:10.757 "iscsi_enable_histogram", 00:06:10.757 "iscsi_set_options", 00:06:10.757 "iscsi_get_auth_groups", 00:06:10.757 "iscsi_auth_group_remove_secret", 00:06:10.757 "iscsi_auth_group_add_secret", 00:06:10.757 "iscsi_delete_auth_group", 00:06:10.757 "iscsi_create_auth_group", 00:06:10.757 "iscsi_set_discovery_auth", 00:06:10.757 "iscsi_get_options", 00:06:10.757 "iscsi_target_node_request_logout", 00:06:10.757 "iscsi_target_node_set_redirect", 00:06:10.757 "iscsi_target_node_set_auth", 00:06:10.757 "iscsi_target_node_add_lun", 00:06:10.757 "iscsi_get_stats", 00:06:10.757 "iscsi_get_connections", 00:06:10.757 "iscsi_portal_group_set_auth", 00:06:10.757 "iscsi_start_portal_group", 00:06:10.757 "iscsi_delete_portal_group", 00:06:10.757 "iscsi_create_portal_group", 00:06:10.757 "iscsi_get_portal_groups", 00:06:10.757 "iscsi_delete_target_node", 00:06:10.757 "iscsi_target_node_remove_pg_ig_maps", 00:06:10.757 "iscsi_target_node_add_pg_ig_maps", 00:06:10.757 "iscsi_create_target_node", 00:06:10.757 "iscsi_get_target_nodes", 00:06:10.757 "iscsi_delete_initiator_group", 00:06:10.757 "iscsi_initiator_group_remove_initiators", 00:06:10.757 "iscsi_initiator_group_add_initiators", 00:06:10.757 "iscsi_create_initiator_group", 00:06:10.757 "iscsi_get_initiator_groups", 00:06:10.757 "nvmf_set_crdt", 00:06:10.757 "nvmf_set_config", 00:06:10.757 "nvmf_set_max_subsystems", 00:06:10.757 "nvmf_stop_mdns_prr", 00:06:10.757 "nvmf_publish_mdns_prr", 00:06:10.757 "nvmf_subsystem_get_listeners", 00:06:10.757 "nvmf_subsystem_get_qpairs", 00:06:10.757 "nvmf_subsystem_get_controllers", 00:06:10.757 "nvmf_get_stats", 00:06:10.757 "nvmf_get_transports", 00:06:10.757 "nvmf_create_transport", 00:06:10.757 "nvmf_get_targets", 00:06:10.757 "nvmf_delete_target", 00:06:10.757 "nvmf_create_target", 00:06:10.757 "nvmf_subsystem_allow_any_host", 00:06:10.757 "nvmf_subsystem_remove_host", 00:06:10.757 
"nvmf_subsystem_add_host", 00:06:10.757 "nvmf_ns_remove_host", 00:06:10.757 "nvmf_ns_add_host", 00:06:10.757 "nvmf_subsystem_remove_ns", 00:06:10.757 "nvmf_subsystem_add_ns", 00:06:10.757 "nvmf_subsystem_listener_set_ana_state", 00:06:10.757 "nvmf_discovery_get_referrals", 00:06:10.757 "nvmf_discovery_remove_referral", 00:06:10.757 "nvmf_discovery_add_referral", 00:06:10.757 "nvmf_subsystem_remove_listener", 00:06:10.757 "nvmf_subsystem_add_listener", 00:06:10.757 "nvmf_delete_subsystem", 00:06:10.757 "nvmf_create_subsystem", 00:06:10.757 "nvmf_get_subsystems", 00:06:10.757 "env_dpdk_get_mem_stats", 00:06:10.757 "nbd_get_disks", 00:06:10.757 "nbd_stop_disk", 00:06:10.757 "nbd_start_disk", 00:06:10.757 "ublk_recover_disk", 00:06:10.757 "ublk_get_disks", 00:06:10.757 "ublk_stop_disk", 00:06:10.757 "ublk_start_disk", 00:06:10.757 "ublk_destroy_target", 00:06:10.757 "ublk_create_target", 00:06:10.757 "virtio_blk_create_transport", 00:06:10.757 "virtio_blk_get_transports", 00:06:10.757 "vhost_controller_set_coalescing", 00:06:10.757 "vhost_get_controllers", 00:06:10.757 "vhost_delete_controller", 00:06:10.757 "vhost_create_blk_controller", 00:06:10.757 "vhost_scsi_controller_remove_target", 00:06:10.757 "vhost_scsi_controller_add_target", 00:06:10.757 "vhost_start_scsi_controller", 00:06:10.757 "vhost_create_scsi_controller", 00:06:10.757 "thread_set_cpumask", 00:06:10.757 "framework_get_governor", 00:06:10.757 "framework_get_scheduler", 00:06:10.757 "framework_set_scheduler", 00:06:10.757 "framework_get_reactors", 00:06:10.757 "thread_get_io_channels", 00:06:10.757 "thread_get_pollers", 00:06:10.757 "thread_get_stats", 00:06:10.757 "framework_monitor_context_switch", 00:06:10.757 "spdk_kill_instance", 00:06:10.757 "log_enable_timestamps", 00:06:10.757 "log_get_flags", 00:06:10.757 "log_clear_flag", 00:06:10.757 "log_set_flag", 00:06:10.757 "log_get_level", 00:06:10.757 "log_set_level", 00:06:10.757 "log_get_print_level", 00:06:10.757 "log_set_print_level", 00:06:10.757 "framework_enable_cpumask_locks", 00:06:10.757 "framework_disable_cpumask_locks", 00:06:10.757 "framework_wait_init", 00:06:10.757 "framework_start_init", 00:06:10.757 "scsi_get_devices", 00:06:10.757 "bdev_get_histogram", 00:06:10.757 "bdev_enable_histogram", 00:06:10.757 "bdev_set_qos_limit", 00:06:10.757 "bdev_set_qd_sampling_period", 00:06:10.757 "bdev_get_bdevs", 00:06:10.757 "bdev_reset_iostat", 00:06:10.757 "bdev_get_iostat", 00:06:10.757 "bdev_examine", 00:06:10.757 "bdev_wait_for_examine", 00:06:10.757 "bdev_set_options", 00:06:10.757 "notify_get_notifications", 00:06:10.757 "notify_get_types", 00:06:10.757 "accel_get_stats", 00:06:10.757 "accel_set_options", 00:06:10.757 "accel_set_driver", 00:06:10.757 "accel_crypto_key_destroy", 00:06:10.757 "accel_crypto_keys_get", 00:06:10.757 "accel_crypto_key_create", 00:06:10.757 "accel_assign_opc", 00:06:10.757 "accel_get_module_info", 00:06:10.757 "accel_get_opc_assignments", 00:06:10.757 "vmd_rescan", 00:06:10.757 "vmd_remove_device", 00:06:10.757 "vmd_enable", 00:06:10.757 "sock_get_default_impl", 00:06:10.757 "sock_set_default_impl", 00:06:10.757 "sock_impl_set_options", 00:06:10.757 "sock_impl_get_options", 00:06:10.757 "iobuf_get_stats", 00:06:10.757 "iobuf_set_options", 00:06:10.757 "framework_get_pci_devices", 00:06:10.757 "framework_get_config", 00:06:10.757 "framework_get_subsystems", 00:06:10.757 "trace_get_info", 00:06:10.757 "trace_get_tpoint_group_mask", 00:06:10.757 "trace_disable_tpoint_group", 00:06:10.757 "trace_enable_tpoint_group", 00:06:10.757 
"trace_clear_tpoint_mask", 00:06:10.757 "trace_set_tpoint_mask", 00:06:10.757 "keyring_get_keys", 00:06:10.757 "spdk_get_version", 00:06:10.757 "rpc_get_methods" 00:06:10.757 ] 00:06:10.757 23:31:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.757 23:31:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:10.757 23:31:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1298677 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@942 -- # '[' -z 1298677 ']' 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@946 -- # kill -0 1298677 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@947 -- # uname 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1298677 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1298677' 00:06:10.757 killing process with pid 1298677 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@961 -- # kill 1298677 00:06:10.757 23:31:59 spdkcli_tcp -- common/autotest_common.sh@966 -- # wait 1298677 00:06:11.015 00:06:11.015 real 0m1.487s 00:06:11.015 user 0m2.780s 00:06:11.015 sys 0m0.410s 00:06:11.015 23:31:59 spdkcli_tcp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:11.015 23:31:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.015 ************************************ 00:06:11.015 END TEST spdkcli_tcp 00:06:11.015 ************************************ 00:06:11.015 23:31:59 -- common/autotest_common.sh@1136 -- # return 0 00:06:11.015 23:31:59 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.015 23:31:59 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:11.015 23:31:59 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:11.015 23:31:59 -- common/autotest_common.sh@10 -- # set +x 00:06:11.015 ************************************ 00:06:11.015 START TEST dpdk_mem_utility 00:06:11.015 ************************************ 00:06:11.015 23:31:59 dpdk_mem_utility -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.272 * Looking for test storage... 
00:06:11.272 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:11.273 23:32:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:11.273 23:32:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1298981 00:06:11.273 23:32:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.273 23:32:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1298981 00:06:11.273 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@823 -- # '[' -z 1298981 ']' 00:06:11.273 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.273 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:11.273 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.273 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:11.273 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.273 [2024-07-15 23:32:00.099750] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:11.273 [2024-07-15 23:32:00.099797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1298981 ] 00:06:11.273 [2024-07-15 23:32:00.153706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.273 [2024-07-15 23:32:00.233745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.200 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:12.200 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@856 -- # return 0 00:06:12.200 23:32:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:12.200 23:32:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:12.200 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:12.200 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.200 { 00:06:12.200 "filename": "/tmp/spdk_mem_dump.txt" 00:06:12.200 } 00:06:12.200 23:32:00 dpdk_mem_utility -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:12.200 23:32:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:12.200 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:12.200 1 heaps totaling size 814.000000 MiB 00:06:12.200 size: 814.000000 MiB heap id: 0 00:06:12.200 end heaps---------- 00:06:12.200 8 mempools totaling size 598.116089 MiB 00:06:12.200 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:12.200 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:12.200 size: 84.521057 MiB name: bdev_io_1298981 00:06:12.200 size: 51.011292 MiB name: evtpool_1298981 00:06:12.200 size: 50.003479 MiB name: msgpool_1298981 00:06:12.200 size: 21.763794 MiB name: 
PDU_Pool 00:06:12.200 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:12.200 size: 0.026123 MiB name: Session_Pool 00:06:12.200 end mempools------- 00:06:12.200 6 memzones totaling size 4.142822 MiB 00:06:12.200 size: 1.000366 MiB name: RG_ring_0_1298981 00:06:12.200 size: 1.000366 MiB name: RG_ring_1_1298981 00:06:12.200 size: 1.000366 MiB name: RG_ring_4_1298981 00:06:12.200 size: 1.000366 MiB name: RG_ring_5_1298981 00:06:12.200 size: 0.125366 MiB name: RG_ring_2_1298981 00:06:12.200 size: 0.015991 MiB name: RG_ring_3_1298981 00:06:12.200 end memzones------- 00:06:12.200 23:32:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:12.200 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:12.200 list of free elements. size: 12.519348 MiB 00:06:12.200 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:12.200 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:12.200 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:12.200 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:12.200 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:12.200 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:12.200 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:12.200 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:12.200 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:12.200 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:12.200 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:12.200 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:12.200 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:12.200 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:12.200 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:12.200 list of standard malloc elements. 
size: 199.218079 MiB 00:06:12.200 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:12.200 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:12.200 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:12.200 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:12.200 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:12.200 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:12.200 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:12.200 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:12.200 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:12.200 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:12.200 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:12.200 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:12.200 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:12.200 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:12.200 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:12.200 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:12.200 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:12.200 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:12.200 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:12.200 list of memzone associated elements. 
size: 602.262573 MiB 00:06:12.200 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:12.200 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:12.200 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:12.200 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:12.200 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:12.200 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1298981_0 00:06:12.200 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:12.200 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1298981_0 00:06:12.200 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:12.200 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1298981_0 00:06:12.200 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:12.200 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:12.200 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:12.200 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:12.200 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:12.200 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1298981 00:06:12.200 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:12.200 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1298981 00:06:12.200 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:12.200 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1298981 00:06:12.200 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:12.200 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:12.200 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:12.200 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:12.200 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:12.200 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:12.200 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:12.200 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:12.200 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:12.200 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1298981 00:06:12.200 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:12.200 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1298981 00:06:12.200 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:12.200 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1298981 00:06:12.200 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:12.200 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1298981 00:06:12.200 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:12.200 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1298981 00:06:12.200 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:12.200 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:12.200 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:12.200 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:12.200 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:12.200 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:12.200 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:12.200 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1298981 00:06:12.200 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:12.200 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:12.200 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:12.200 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:12.200 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:12.200 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1298981 00:06:12.200 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:12.200 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:12.200 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:12.200 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1298981 00:06:12.200 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:12.200 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1298981 00:06:12.200 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:12.200 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:12.200 23:32:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:12.200 23:32:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1298981 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@942 -- # '[' -z 1298981 ']' 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@946 -- # kill -0 1298981 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@947 -- # uname 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1298981 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1298981' 00:06:12.200 killing process with pid 1298981 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@961 -- # kill 1298981 00:06:12.200 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@966 -- # wait 1298981 00:06:12.457 00:06:12.457 real 0m1.389s 00:06:12.457 user 0m1.458s 00:06:12.457 sys 0m0.395s 00:06:12.457 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:12.457 23:32:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.457 ************************************ 00:06:12.457 END TEST dpdk_mem_utility 00:06:12.457 ************************************ 00:06:12.457 23:32:01 -- common/autotest_common.sh@1136 -- # return 0 00:06:12.457 23:32:01 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:12.457 23:32:01 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:12.457 23:32:01 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:12.457 23:32:01 -- common/autotest_common.sh@10 -- # set +x 00:06:12.457 ************************************ 00:06:12.457 START TEST event 00:06:12.457 ************************************ 00:06:12.457 23:32:01 event -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:12.714 * Looking for test storage... 
00:06:12.714 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:12.714 23:32:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:12.714 23:32:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:12.714 23:32:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.714 23:32:01 event -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:06:12.714 23:32:01 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:12.714 23:32:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:12.714 ************************************ 00:06:12.714 START TEST event_perf 00:06:12.714 ************************************ 00:06:12.714 23:32:01 event.event_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:12.714 Running I/O for 1 seconds...[2024-07-15 23:32:01.558030] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:12.714 [2024-07-15 23:32:01.558096] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299277 ] 00:06:12.714 [2024-07-15 23:32:01.617073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.714 [2024-07-15 23:32:01.691282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.714 [2024-07-15 23:32:01.691382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.714 [2024-07-15 23:32:01.691714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.714 [2024-07-15 23:32:01.691717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.083 Running I/O for 1 seconds... 00:06:14.083 lcore 0: 213663 00:06:14.083 lcore 1: 213664 00:06:14.083 lcore 2: 213664 00:06:14.083 lcore 3: 213663 00:06:14.083 done. 00:06:14.083 00:06:14.083 real 0m1.224s 00:06:14.083 user 0m4.144s 00:06:14.083 sys 0m0.077s 00:06:14.083 23:32:02 event.event_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:14.083 23:32:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:14.083 ************************************ 00:06:14.083 END TEST event_perf 00:06:14.083 ************************************ 00:06:14.083 23:32:02 event -- common/autotest_common.sh@1136 -- # return 0 00:06:14.083 23:32:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:14.083 23:32:02 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:14.083 23:32:02 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:14.083 23:32:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.083 ************************************ 00:06:14.083 START TEST event_reactor 00:06:14.083 ************************************ 00:06:14.083 23:32:02 event.event_reactor -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:14.083 [2024-07-15 23:32:02.832670] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:14.083 [2024-07-15 23:32:02.832721] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299526 ] 00:06:14.083 [2024-07-15 23:32:02.886751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.083 [2024-07-15 23:32:02.958185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.452 test_start 00:06:15.452 oneshot 00:06:15.452 tick 100 00:06:15.452 tick 100 00:06:15.452 tick 250 00:06:15.452 tick 100 00:06:15.452 tick 100 00:06:15.452 tick 250 00:06:15.452 tick 100 00:06:15.452 tick 500 00:06:15.452 tick 100 00:06:15.452 tick 100 00:06:15.452 tick 250 00:06:15.452 tick 100 00:06:15.452 tick 100 00:06:15.452 test_end 00:06:15.452 00:06:15.452 real 0m1.204s 00:06:15.452 user 0m1.129s 00:06:15.452 sys 0m0.071s 00:06:15.452 23:32:04 event.event_reactor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:15.452 23:32:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:15.452 ************************************ 00:06:15.452 END TEST event_reactor 00:06:15.452 ************************************ 00:06:15.452 23:32:04 event -- common/autotest_common.sh@1136 -- # return 0 00:06:15.452 23:32:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.452 23:32:04 event -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:06:15.452 23:32:04 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:15.452 23:32:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.452 ************************************ 00:06:15.452 START TEST event_reactor_perf 00:06:15.452 ************************************ 00:06:15.452 23:32:04 event.event_reactor_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:15.452 [2024-07-15 23:32:04.104196] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:15.452 [2024-07-15 23:32:04.104262] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299772 ] 00:06:15.452 [2024-07-15 23:32:04.162269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.452 [2024-07-15 23:32:04.232444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.384 test_start 00:06:16.385 test_end 00:06:16.385 Performance: 507976 events per second 00:06:16.385 00:06:16.385 real 0m1.218s 00:06:16.385 user 0m1.143s 00:06:16.385 sys 0m0.071s 00:06:16.385 23:32:05 event.event_reactor_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:16.385 23:32:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.385 ************************************ 00:06:16.385 END TEST event_reactor_perf 00:06:16.385 ************************************ 00:06:16.385 23:32:05 event -- common/autotest_common.sh@1136 -- # return 0 00:06:16.385 23:32:05 event -- event/event.sh@49 -- # uname -s 00:06:16.385 23:32:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:16.385 23:32:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:16.385 23:32:05 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:16.385 23:32:05 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:16.385 23:32:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.643 ************************************ 00:06:16.643 START TEST event_scheduler 00:06:16.643 ************************************ 00:06:16.643 23:32:05 event.event_scheduler -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:16.643 * Looking for test storage... 00:06:16.643 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:16.643 23:32:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:16.643 23:32:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1300046 00:06:16.643 23:32:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:16.643 23:32:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.643 23:32:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1300046 00:06:16.643 23:32:05 event.event_scheduler -- common/autotest_common.sh@823 -- # '[' -z 1300046 ']' 00:06:16.643 23:32:05 event.event_scheduler -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.643 23:32:05 event.event_scheduler -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:16.643 23:32:05 event.event_scheduler -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
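The waitforlisten step traced here polls the target's RPC UNIX-domain socket until the app launched just above (with --wait-for-rpc -f) is ready to serve requests. A minimal sketch of that kind of readiness loop follows, reusing the /var/tmp/spdk.sock path and max_retries=100 seen in this trace; the choice of rpc_get_methods as the probe and the sleep interval are illustrative assumptions, not the exact body of the autotest_common.sh helper.

# Sketch (assumptions noted above): poll an SPDK app's RPC socket until it answers.
wait_for_spdk_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1                     # target died before listening
        ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1                                                       # gave up after max_retries
}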
00:06:16.643 23:32:05 event.event_scheduler -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:16.643 23:32:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:16.643 [2024-07-15 23:32:05.492646] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:16.643 [2024-07-15 23:32:05.492695] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1300046 ] 00:06:16.643 [2024-07-15 23:32:05.541953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.643 [2024-07-15 23:32:05.616840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.643 [2024-07-15 23:32:05.616929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.643 [2024-07-15 23:32:05.617017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.643 [2024-07-15 23:32:05.617018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@856 -- # return 0 00:06:17.576 23:32:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 [2024-07-15 23:32:06.307407] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:17.576 [2024-07-15 23:32:06.307426] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:17.576 [2024-07-15 23:32:06.307435] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:17.576 [2024-07-15 23:32:06.307440] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:17.576 [2024-07-15 23:32:06.307445] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.576 23:32:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 [2024-07-15 23:32:06.382920] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
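The scheduler_create_thread subtest that follows drives the running scheduler app purely over RPC, loading the test's scheduler_plugin to spawn pinned and unpinned threads with different activity percentages, retune one, and delete another. A condensed sketch of that call sequence is below; the rpc.py path, socket, and the -n/-m/-a values come from the trace, but collapsing create, set_active, and delete onto a single thread is an illustrative simplification of what the test does across thread ids 11 and 12.

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock
# Busy thread pinned to core 0: cpumask 0x1, 100% active.
$RPC -s $SOCK --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# Unpinned, initially idle thread; the RPC prints the new thread id.
tid=$($RPC -s $SOCK --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
# Make it ~50% active, then remove it again.
$RPC -s $SOCK --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
$RPC -s $SOCK --plugin scheduler_plugin scheduler_thread_delete "$tid"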
00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.576 23:32:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:17.576 23:32:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:17.576 ************************************ 00:06:17.577 START TEST scheduler_create_thread 00:06:17.577 ************************************ 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1117 -- # scheduler_create_thread 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 2 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 3 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 4 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 5 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 6 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 7 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 8 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 9 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 10 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:17.577 23:32:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 23:32:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:18.143 23:32:07 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:18.143 23:32:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:18.143 23:32:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.621 23:32:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:19.621 23:32:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:19.621 23:32:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:19.621 23:32:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:19.621 23:32:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.554 23:32:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:20.554 00:06:20.554 real 0m3.098s 00:06:20.554 user 0m0.026s 00:06:20.554 sys 0m0.003s 00:06:20.554 23:32:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:20.554 23:32:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.554 ************************************ 00:06:20.554 END TEST scheduler_create_thread 00:06:20.554 ************************************ 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@1136 -- # return 0 00:06:20.811 23:32:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:20.811 23:32:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1300046 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@942 -- # '[' -z 1300046 ']' 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@946 -- # kill -0 1300046 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@947 -- # uname 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1300046 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1300046' 00:06:20.811 killing process with pid 1300046 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@961 -- # kill 1300046 00:06:20.811 23:32:09 event.event_scheduler -- common/autotest_common.sh@966 -- # wait 1300046 00:06:21.068 [2024-07-15 23:32:09.898228] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
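The shutdown just traced follows the killprocess pattern used throughout this log: confirm the pid still exists, check via ps what it actually is (a reactor thread here, not sudo), announce the kill, then kill and reap it. A simplified sketch is below, assuming the caller owns the process; the real autotest_common.sh helper has extra branches (sudo-owned processes, the non-Linux path behind the uname check) that are omitted.

killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0               # nothing to do, already gone
    local name
    name=$(ps --no-headers -o comm= "$pid")              # e.g. reactor_2 in this run
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                      # reap if it is our child
}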
00:06:21.327 00:06:21.327 real 0m4.749s 00:06:21.327 user 0m9.293s 00:06:21.327 sys 0m0.359s 00:06:21.327 23:32:10 event.event_scheduler -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:21.327 23:32:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.327 ************************************ 00:06:21.327 END TEST event_scheduler 00:06:21.327 ************************************ 00:06:21.327 23:32:10 event -- common/autotest_common.sh@1136 -- # return 0 00:06:21.327 23:32:10 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:21.327 23:32:10 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:21.327 23:32:10 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:21.327 23:32:10 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:21.327 23:32:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:21.327 ************************************ 00:06:21.327 START TEST app_repeat 00:06:21.327 ************************************ 00:06:21.327 23:32:10 event.app_repeat -- common/autotest_common.sh@1117 -- # app_repeat_test 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1301015 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1301015' 00:06:21.327 Process app_repeat pid: 1301015 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:21.327 spdk_app_start Round 0 00:06:21.327 23:32:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1301015 /var/tmp/spdk-nbd.sock 00:06:21.327 23:32:10 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1301015 ']' 00:06:21.327 23:32:10 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.327 23:32:10 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:21.327 23:32:10 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:21.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.327 23:32:10 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:21.327 23:32:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.327 [2024-07-15 23:32:10.221528] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:21.327 [2024-07-15 23:32:10.221588] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1301015 ] 00:06:21.327 [2024-07-15 23:32:10.279061] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.585 [2024-07-15 23:32:10.353909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.585 [2024-07-15 23:32:10.353911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.150 23:32:11 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:22.150 23:32:11 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:06:22.150 23:32:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.410 Malloc0 00:06:22.410 23:32:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.666 Malloc1 00:06:22.666 23:32:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.666 23:32:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.667 /dev/nbd0 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 
00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.667 1+0 records in 00:06:22.667 1+0 records out 00:06:22.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205338 s, 19.9 MB/s 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:06:22.667 23:32:11 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.667 23:32:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.924 /dev/nbd1 00:06:22.924 23:32:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:22.924 23:32:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.924 1+0 records in 00:06:22.924 1+0 records out 00:06:22.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196288 s, 20.9 MB/s 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:06:22.924 23:32:11 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:06:22.924 23:32:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.924 23:32:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.924 23:32:11 event.app_repeat -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.924 23:32:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.924 23:32:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.182 23:32:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.182 { 00:06:23.182 "nbd_device": "/dev/nbd0", 00:06:23.182 "bdev_name": "Malloc0" 00:06:23.182 }, 00:06:23.182 { 00:06:23.182 "nbd_device": "/dev/nbd1", 00:06:23.182 "bdev_name": "Malloc1" 00:06:23.182 } 00:06:23.182 ]' 00:06:23.182 23:32:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.182 { 00:06:23.182 "nbd_device": "/dev/nbd0", 00:06:23.182 "bdev_name": "Malloc0" 00:06:23.182 }, 00:06:23.182 { 00:06:23.182 "nbd_device": "/dev/nbd1", 00:06:23.182 "bdev_name": "Malloc1" 00:06:23.182 } 00:06:23.182 ]' 00:06:23.182 23:32:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.182 /dev/nbd1' 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.182 /dev/nbd1' 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.182 256+0 records in 00:06:23.182 256+0 records out 00:06:23.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103793 s, 101 MB/s 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.182 256+0 records in 00:06:23.182 256+0 records out 00:06:23.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130706 s, 80.2 MB/s 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.182 256+0 records in 00:06:23.182 256+0 records out 00:06:23.182 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141995 s, 73.8 MB/s 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify 
'/dev/nbd0 /dev/nbd1' verify 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.182 23:32:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.439 23:32:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.440 23:32:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd1 /proc/partitions 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:23.696 23:32:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:23.954 23:32:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:23.954 23:32:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:23.954 23:32:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:24.211 [2024-07-15 23:32:13.091271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.211 [2024-07-15 23:32:13.157695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.211 [2024-07-15 23:32:13.157697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.469 [2024-07-15 23:32:13.198296] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:24.469 [2024-07-15 23:32:13.198329] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:26.993 23:32:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:26.993 23:32:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:26.993 spdk_app_start Round 1 00:06:26.993 23:32:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1301015 /var/tmp/spdk-nbd.sock 00:06:26.993 23:32:15 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1301015 ']' 00:06:26.993 23:32:15 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:26.993 23:32:15 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:26.993 23:32:15 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:26.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
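Annotation (not part of the captured log): every round of app_repeat repeats the nbd_rpc_data_verify sequence visible above. A condensed sketch, with rpc.py standing in for the full /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py path and nbdrandtest for the temp file used in the trace:

  sock=/var/tmp/spdk-nbd.sock
  rpc.py -s $sock bdev_malloc_create 64 4096           # -> Malloc0 (64 MB, 4096-byte blocks)
  rpc.py -s $sock bdev_malloc_create 64 4096           # -> Malloc1
  rpc.py -s $sock nbd_start_disk Malloc0 /dev/nbd0     # likewise Malloc1 -> /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256  # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                   # read back and verify, same for /dev/nbd1
  rpc.py -s $sock nbd_stop_disk /dev/nbd0              # teardown, then spdk_kill_instance SIGTERM

The single-block dd with iflag=direct earlier in the trace is the waitfornbd helper probing that the freshly attached /dev/nbdX actually answers I/O after it shows up in /proc/partitions.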
00:06:26.993 23:32:15 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:26.993 23:32:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.251 23:32:16 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:27.251 23:32:16 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:06:27.251 23:32:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.509 Malloc0 00:06:27.509 23:32:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:27.509 Malloc1 00:06:27.509 23:32:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.509 23:32:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:27.767 /dev/nbd0 00:06:27.767 23:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:27.767 23:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:27.767 1+0 records in 00:06:27.767 1+0 records out 00:06:27.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199283 s, 20.6 MB/s 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:06:27.767 23:32:16 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:06:27.767 23:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:27.767 23:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:27.767 23:32:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.025 /dev/nbd1 00:06:28.025 23:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.025 23:32:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.025 1+0 records in 00:06:28.025 1+0 records out 00:06:28.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199931 s, 20.5 MB/s 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:06:28.025 23:32:16 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:06:28.025 23:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.025 23:32:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.025 23:32:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.025 23:32:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.025 23:32:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:28.283 { 00:06:28.283 
"nbd_device": "/dev/nbd0", 00:06:28.283 "bdev_name": "Malloc0" 00:06:28.283 }, 00:06:28.283 { 00:06:28.283 "nbd_device": "/dev/nbd1", 00:06:28.283 "bdev_name": "Malloc1" 00:06:28.283 } 00:06:28.283 ]' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:28.283 { 00:06:28.283 "nbd_device": "/dev/nbd0", 00:06:28.283 "bdev_name": "Malloc0" 00:06:28.283 }, 00:06:28.283 { 00:06:28.283 "nbd_device": "/dev/nbd1", 00:06:28.283 "bdev_name": "Malloc1" 00:06:28.283 } 00:06:28.283 ]' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:28.283 /dev/nbd1' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:28.283 /dev/nbd1' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:28.283 256+0 records in 00:06:28.283 256+0 records out 00:06:28.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102844 s, 102 MB/s 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:28.283 256+0 records in 00:06:28.283 256+0 records out 00:06:28.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141937 s, 73.9 MB/s 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:28.283 256+0 records in 00:06:28.283 256+0 records out 00:06:28.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143541 s, 73.1 MB/s 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.283 23:32:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.541 23:32:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:28.799 23:32:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:28.800 23:32:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:29.058 23:32:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:29.316 [2024-07-15 23:32:18.092363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:29.316 [2024-07-15 23:32:18.159018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.316 [2024-07-15 23:32:18.159020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.316 [2024-07-15 23:32:18.199717] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:29.316 [2024-07-15 23:32:18.199758] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:32.591 23:32:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:32.591 23:32:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:32.591 spdk_app_start Round 2 00:06:32.591 23:32:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1301015 /var/tmp/spdk-nbd.sock 00:06:32.591 23:32:20 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1301015 ']' 00:06:32.591 23:32:20 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:32.591 23:32:20 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:32.591 23:32:20 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:32.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
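Annotation (not part of the captured log): nbd_get_count, which appears before and after each teardown above, counts the attached devices by parsing the nbd_get_disks RPC output. A sketch of that check; the '|| true' mirrors the trace, where grep -c exits non-zero on an empty list:

  json=$(rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)
  # expected: 2 while Malloc0/Malloc1 are exported, 0 after nbd_stop_disk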
00:06:32.591 23:32:20 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:32.591 23:32:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:32.591 23:32:21 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:32.591 23:32:21 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:06:32.591 23:32:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.592 Malloc0 00:06:32.592 23:32:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.592 Malloc1 00:06:32.592 23:32:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.592 23:32:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.848 /dev/nbd0 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd0 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd0 /proc/partitions 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:32.848 1+0 records in 00:06:32.848 1+0 records out 00:06:32.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182756 s, 22.4 MB/s 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.848 /dev/nbd1 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@860 -- # local nbd_name=nbd1 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@861 -- # local i 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@863 -- # (( i = 1 )) 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@863 -- # (( i <= 20 )) 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@864 -- # grep -q -w nbd1 /proc/partitions 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@865 -- # break 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@876 -- # (( i = 1 )) 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@876 -- # (( i <= 20 )) 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@877 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:32.848 1+0 records in 00:06:32.848 1+0 records out 00:06:32.848 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182797 s, 22.4 MB/s 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@878 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@878 -- # size=4096 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@879 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@880 -- # '[' 4096 '!=' 0 ']' 00:06:32.848 23:32:21 event.app_repeat -- common/autotest_common.sh@881 -- # return 0 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.848 23:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.104 23:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:33.104 { 00:06:33.104 
"nbd_device": "/dev/nbd0", 00:06:33.104 "bdev_name": "Malloc0" 00:06:33.104 }, 00:06:33.104 { 00:06:33.104 "nbd_device": "/dev/nbd1", 00:06:33.104 "bdev_name": "Malloc1" 00:06:33.104 } 00:06:33.104 ]' 00:06:33.104 23:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.104 { 00:06:33.104 "nbd_device": "/dev/nbd0", 00:06:33.104 "bdev_name": "Malloc0" 00:06:33.104 }, 00:06:33.104 { 00:06:33.104 "nbd_device": "/dev/nbd1", 00:06:33.104 "bdev_name": "Malloc1" 00:06:33.104 } 00:06:33.104 ]' 00:06:33.104 23:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.104 23:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.104 /dev/nbd1' 00:06:33.104 23:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.104 23:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.104 /dev/nbd1' 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.105 256+0 records in 00:06:33.105 256+0 records out 00:06:33.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103065 s, 102 MB/s 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.105 256+0 records in 00:06:33.105 256+0 records out 00:06:33.105 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013441 s, 78.0 MB/s 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.105 23:32:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.361 256+0 records in 00:06:33.361 256+0 records out 00:06:33.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143876 s, 72.9 MB/s 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.361 23:32:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.663 23:32:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:33.920 23:32:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:33.920 23:32:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.176 23:32:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:34.176 [2024-07-15 23:32:23.095182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.436 [2024-07-15 23:32:23.163523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.436 [2024-07-15 23:32:23.163525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.436 [2024-07-15 23:32:23.204323] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:34.436 [2024-07-15 23:32:23.204363] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.955 23:32:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1301015 /var/tmp/spdk-nbd.sock 00:06:36.955 23:32:25 event.app_repeat -- common/autotest_common.sh@823 -- # '[' -z 1301015 ']' 00:06:36.955 23:32:25 event.app_repeat -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.955 23:32:25 event.app_repeat -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:36.955 23:32:25 event.app_repeat -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
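Annotation (not part of the captured log): the 'for i in {0..2}' loop seen at each 'spdk_app_start Round N' echo has now finished; the waitforlisten at event.sh@38 just above is the final synchronization before the harness tears the app down, roughly:

  waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
  killprocess "$repeat_pid"        # kill + wait, as logged next for pid 1301015

The app then prints one 'spdk_app_start is called in Round N' / 'Shutdown signal received' pair per iteration as its exit summary, which is what follows in the trace.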
00:06:36.955 23:32:25 event.app_repeat -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:36.955 23:32:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@856 -- # return 0 00:06:37.212 23:32:26 event.app_repeat -- event/event.sh@39 -- # killprocess 1301015 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@942 -- # '[' -z 1301015 ']' 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@946 -- # kill -0 1301015 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@947 -- # uname 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1301015 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:37.212 23:32:26 event.app_repeat -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:37.213 23:32:26 event.app_repeat -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1301015' 00:06:37.213 killing process with pid 1301015 00:06:37.213 23:32:26 event.app_repeat -- common/autotest_common.sh@961 -- # kill 1301015 00:06:37.213 23:32:26 event.app_repeat -- common/autotest_common.sh@966 -- # wait 1301015 00:06:37.470 spdk_app_start is called in Round 0. 00:06:37.470 Shutdown signal received, stop current app iteration 00:06:37.470 Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 reinitialization... 00:06:37.470 spdk_app_start is called in Round 1. 00:06:37.470 Shutdown signal received, stop current app iteration 00:06:37.470 Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 reinitialization... 00:06:37.470 spdk_app_start is called in Round 2. 00:06:37.470 Shutdown signal received, stop current app iteration 00:06:37.470 Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 reinitialization... 00:06:37.470 spdk_app_start is called in Round 3. 
00:06:37.470 Shutdown signal received, stop current app iteration 00:06:37.470 23:32:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:37.470 23:32:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:37.470 00:06:37.470 real 0m16.108s 00:06:37.470 user 0m34.783s 00:06:37.470 sys 0m2.389s 00:06:37.470 23:32:26 event.app_repeat -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:37.470 23:32:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:37.470 ************************************ 00:06:37.470 END TEST app_repeat 00:06:37.470 ************************************ 00:06:37.470 23:32:26 event -- common/autotest_common.sh@1136 -- # return 0 00:06:37.470 23:32:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:37.470 23:32:26 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:37.470 23:32:26 event -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:37.470 23:32:26 event -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:37.470 23:32:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.470 ************************************ 00:06:37.470 START TEST cpu_locks 00:06:37.470 ************************************ 00:06:37.470 23:32:26 event.cpu_locks -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:37.470 * Looking for test storage... 00:06:37.470 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:37.470 23:32:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:37.470 23:32:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:37.470 23:32:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:37.470 23:32:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:37.470 23:32:26 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:37.470 23:32:26 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:37.470 23:32:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.729 ************************************ 00:06:37.729 START TEST default_locks 00:06:37.729 ************************************ 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1117 -- # default_locks 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1303949 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1303949 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 1303949 ']' 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
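Annotation (not part of the captured log): the default_locks test starting here follows the pattern visible elsewhere in this trace; a condensed sketch, with spdk_tgt standing in for /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt and the helpers taken from autotest_common.sh as they appear in the log:

  spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"                        # default socket /var/tmp/spdk.sock
  lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # the locks_exist check from cpu_locks.sh
  killprocess "$spdk_tgt_pid"
  NOT waitforlisten "$spdk_tgt_pid"                    # must now fail: 'No such process' below

The stray 'lslocks: write error' in the trace is most likely harmless noise: grep -q exits on the first match and closes the pipe while lslocks is still writing.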
00:06:37.729 23:32:26 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:37.729 23:32:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.729 [2024-07-15 23:32:26.525121] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:37.729 [2024-07-15 23:32:26.525164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1303949 ] 00:06:37.729 [2024-07-15 23:32:26.580586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.729 [2024-07-15 23:32:26.659173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 0 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1303949 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1303949 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.660 lslocks: write error 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1303949 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@942 -- # '[' -z 1303949 ']' 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # kill -0 1303949 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # uname 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1303949 00:06:38.660 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:38.661 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:38.661 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1303949' 00:06:38.661 killing process with pid 1303949 00:06:38.661 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@961 -- # kill 1303949 00:06:38.661 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # wait 1303949 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1303949 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # local es=0 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1303949 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # waitforlisten 1303949 00:06:38.918 
23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@823 -- # '[' -z 1303949 ']' 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.918 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1303949) - No such process 00:06:38.918 ERROR: process (pid: 1303949) is no longer running 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # return 1 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@645 -- # es=1 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.918 00:06:38.918 real 0m1.305s 00:06:38.918 user 0m1.364s 00:06:38.918 sys 0m0.398s 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:38.918 23:32:27 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.918 ************************************ 00:06:38.918 END TEST default_locks 00:06:38.918 ************************************ 00:06:38.918 23:32:27 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:38.918 23:32:27 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:38.918 23:32:27 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:38.918 23:32:27 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:38.918 23:32:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.918 ************************************ 00:06:38.918 START TEST default_locks_via_rpc 00:06:38.918 ************************************ 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1117 -- # default_locks_via_rpc 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1304200 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1304200 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1304200 ']' 00:06:38.918 23:32:27 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.918 23:32:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.918 [2024-07-15 23:32:27.880572] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:38.918 [2024-07-15 23:32:27.880609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304200 ] 00:06:39.176 [2024-07-15 23:32:27.935088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.176 [2024-07-15 23:32:28.014931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1304200 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1304200 00:06:39.741 23:32:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1304200 00:06:40.307 23:32:29 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@942 -- # '[' -z 1304200 ']' 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # kill -0 1304200 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # uname 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1304200 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1304200' 00:06:40.307 killing process with pid 1304200 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@961 -- # kill 1304200 00:06:40.307 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # wait 1304200 00:06:40.565 00:06:40.565 real 0m1.555s 00:06:40.565 user 0m1.614s 00:06:40.565 sys 0m0.520s 00:06:40.565 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:40.565 23:32:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.565 ************************************ 00:06:40.565 END TEST default_locks_via_rpc 00:06:40.565 ************************************ 00:06:40.565 23:32:29 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:40.565 23:32:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:40.565 23:32:29 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:40.565 23:32:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:40.565 23:32:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.565 ************************************ 00:06:40.565 START TEST non_locking_app_on_locked_coremask 00:06:40.565 ************************************ 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # non_locking_app_on_locked_coremask 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1304531 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1304531 /var/tmp/spdk.sock 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1304531 ']' 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
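
The default_locks_via_rpc case that just finished exercises the same lslocks check, but toggles the locks at runtime over JSON-RPC instead of at process start: framework_disable_cpumask_locks releases the per-core lock files and framework_enable_cpumask_locks re-claims them, the two method names visible above. A hedged equivalent driven through SPDK's scripts/rpc.py wrapper; the checkout path and the pidof lookup are assumptions, not something this log shows:

    SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk   # assumption: the checkout used by this job
    PID=$(pidof spdk_tgt)                                    # assumption: a single spdk_tgt is running
    "$SPDK_DIR/scripts/rpc.py" framework_disable_cpumask_locks
    lslocks -p "$PID" | grep -q spdk_cpu_lock || echo "locks released"
    "$SPDK_DIR/scripts/rpc.py" framework_enable_cpumask_locks
    lslocks -p "$PID" | grep -q spdk_cpu_lock && echo "locks re-claimed"
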
00:06:40.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:40.565 23:32:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.565 [2024-07-15 23:32:29.490460] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:40.566 [2024-07-15 23:32:29.490495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304531 ] 00:06:40.566 [2024-07-15 23:32:29.543864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.823 [2024-07-15 23:32:29.623118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1304541 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1304541 /var/tmp/spdk2.sock 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1304541 ']' 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:41.393 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.394 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:41.394 23:32:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.394 [2024-07-15 23:32:30.325182] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:41.394 [2024-07-15 23:32:30.325229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304541 ] 00:06:41.651 [2024-07-15 23:32:30.397706] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
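
non_locking_app_on_locked_coremask, now underway, starts a second target on the same 0x1 mask but with --disable-cpumask-locks and its own RPC socket, so it never contends for the lock the first target already holds on core 0; that is why the second instance reports "CPU core locks deactivated" just above. A rough reproduction with the same flags this job uses; the fixed sleeps stand in for the harness's socket polling and are an assumption, not how the test actually waits:

    BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    "$BIN" -m 0x1 &                                                 # first target claims core 0
    pid1=$!
    sleep 2                                                         # crude wait; the harness polls the RPC socket
    "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # second target skips the claim
    pid2=$!
    sleep 2
    lslocks -p "$pid1" | grep -q spdk_cpu_lock && echo "core 0 lock stays with pid $pid1"
    kill "$pid1" "$pid2"
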
00:06:41.651 [2024-07-15 23:32:30.397728] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.651 [2024-07-15 23:32:30.547896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.215 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:42.215 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:42.215 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1304531 00:06:42.215 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1304531 00:06:42.215 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.778 lslocks: write error 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1304531 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1304531 ']' 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1304531 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1304531 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1304531' 00:06:42.778 killing process with pid 1304531 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1304531 00:06:42.778 23:32:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1304531 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1304541 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1304541 ']' 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1304541 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1304541 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1304541' 00:06:43.341 
killing process with pid 1304541 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1304541 00:06:43.341 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1304541 00:06:43.598 00:06:43.598 real 0m3.033s 00:06:43.598 user 0m3.259s 00:06:43.598 sys 0m0.816s 00:06:43.598 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:43.598 23:32:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.598 ************************************ 00:06:43.598 END TEST non_locking_app_on_locked_coremask 00:06:43.598 ************************************ 00:06:43.598 23:32:32 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:43.598 23:32:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:43.598 23:32:32 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:43.598 23:32:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:43.598 23:32:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.598 ************************************ 00:06:43.598 START TEST locking_app_on_unlocked_coremask 00:06:43.598 ************************************ 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_unlocked_coremask 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1305037 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1305037 /var/tmp/spdk.sock 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1305037 ']' 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:43.599 23:32:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.856 [2024-07-15 23:32:32.607439] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:43.856 [2024-07-15 23:32:32.607482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305037 ] 00:06:43.856 [2024-07-15 23:32:32.657277] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
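
locking_app_on_unlocked_coremask, which has just started, inverts that arrangement: the first target above runs with --disable-cpumask-locks, and the second, lock-enabled instance launched next on the same mask (with -r /var/tmp/spdk2.sock) is the one expected to own the core 0 lock. The same lslocks probe tells the two apart; pid_unlocked and pid_locked below are placeholders for the two pids:

    # pid_unlocked / pid_locked are placeholders for the two targets started as
    # in this test (first with --disable-cpumask-locks, second without).
    for p in "$pid_unlocked" "$pid_locked"; do
        n=$(lslocks -p "$p" | grep -c spdk_cpu_lock || true)
        echo "pid $p holds $n spdk_cpu_lock entries"   # expect 0 then 1 here
    done
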
00:06:43.856 [2024-07-15 23:32:32.657301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.856 [2024-07-15 23:32:32.737557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1305237 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1305237 /var/tmp/spdk2.sock 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1305237 ']' 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:44.787 23:32:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.787 [2024-07-15 23:32:33.466543] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:44.787 [2024-07-15 23:32:33.466594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305237 ] 00:06:44.787 [2024-07-15 23:32:33.540990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.787 [2024-07-15 23:32:33.684643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.352 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:45.352 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:45.352 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1305237 00:06:45.352 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1305237 00:06:45.352 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.917 lslocks: write error 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1305037 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1305037 ']' 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 1305037 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1305037 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1305037' 00:06:45.917 killing process with pid 1305037 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 1305037 00:06:45.917 23:32:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 1305037 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1305237 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1305237 ']' 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # kill -0 1305237 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1305237 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:46.481 23:32:35 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1305237' 00:06:46.481 killing process with pid 1305237 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@961 -- # kill 1305237 00:06:46.481 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # wait 1305237 00:06:46.738 00:06:46.738 real 0m3.092s 00:06:46.738 user 0m3.355s 00:06:46.738 sys 0m0.851s 00:06:46.738 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:46.738 23:32:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.738 ************************************ 00:06:46.738 END TEST locking_app_on_unlocked_coremask 00:06:46.738 ************************************ 00:06:46.738 23:32:35 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:46.738 23:32:35 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:46.738 23:32:35 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:46.738 23:32:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:46.738 23:32:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:46.738 ************************************ 00:06:46.738 START TEST locking_app_on_locked_coremask 00:06:46.739 ************************************ 00:06:46.739 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1117 -- # locking_app_on_locked_coremask 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1305537 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1305537 /var/tmp/spdk.sock 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1305537 ']' 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:46.995 23:32:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:46.995 [2024-07-15 23:32:35.770132] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:46.995 [2024-07-15 23:32:35.770176] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305537 ] 00:06:46.995 [2024-07-15 23:32:35.824235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.995 [2024-07-15 23:32:35.892658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1305767 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1305767 /var/tmp/spdk2.sock 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # local es=0 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1305767 /var/tmp/spdk2.sock 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # waitforlisten 1305767 /var/tmp/spdk2.sock 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@823 -- # '[' -z 1305767 ']' 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:47.926 23:32:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.926 [2024-07-15 23:32:36.607716] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
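
The NOT waitforlisten wrapper invoked above is the harness's expected-failure guard: the wrapped command must fail for the test to pass, which is what happens next when the second lock-enabled target cannot claim core 0. A deliberately simplified stand-in for that wrapper; the real helper in autotest_common.sh also records the numeric exit status, as the es=... bookkeeping in this log shows:

    # Simplified approximation of the autotest NOT helper, not the actual code.
    NOT() {
        if "$@"; then
            return 1     # the wrapped command unexpectedly succeeded
        fi
        return 0         # failure was expected, so the assertion passes
    }

    NOT false && echo "expected failure observed"
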
00:06:47.926 [2024-07-15 23:32:36.607762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1305767 ] 00:06:47.926 [2024-07-15 23:32:36.682806] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1305537 has claimed it. 00:06:47.926 [2024-07-15 23:32:36.682842] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.489 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1305767) - No such process 00:06:48.489 ERROR: process (pid: 1305767) is no longer running 00:06:48.489 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:48.490 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # return 1 00:06:48.490 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@645 -- # es=1 00:06:48.490 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:48.490 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:48.490 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:48.490 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1305537 00:06:48.490 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1305537 00:06:48.490 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:48.747 lslocks: write error 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1305537 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@942 -- # '[' -z 1305537 ']' 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # kill -0 1305537 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # uname 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1305537 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1305537' 00:06:48.747 killing process with pid 1305537 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@961 -- # kill 1305537 00:06:48.747 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # wait 1305537 00:06:49.005 00:06:49.005 real 0m2.141s 00:06:49.005 user 0m2.379s 00:06:49.005 sys 0m0.543s 00:06:49.005 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 
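
The refusal recorded above (claim_cpu_cores cannot create the lock on core 0, so spdk_app_start exits) is straightforward to reproduce outside the harness: with a lock-enabled target already running on -m 0x1, a second lock-enabled target on the same mask exits non-zero. A sketch using the flags shown in this log; the fixed sleep is again an assumption in place of the harness's socket polling:

    BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    "$BIN" -m 0x1 &                      # holds the core 0 lock
    pid1=$!
    sleep 2                              # crude wait for startup
    if ! "$BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "second instance refused to start, as expected"
    fi
    kill "$pid1"
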
00:06:49.005 23:32:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.005 ************************************ 00:06:49.005 END TEST locking_app_on_locked_coremask 00:06:49.005 ************************************ 00:06:49.005 23:32:37 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:49.005 23:32:37 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.005 23:32:37 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:49.005 23:32:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:49.005 23:32:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.005 ************************************ 00:06:49.005 START TEST locking_overlapped_coremask 00:06:49.005 ************************************ 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1306023 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1306023 /var/tmp/spdk.sock 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 1306023 ']' 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:49.005 23:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.005 [2024-07-15 23:32:37.976861] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:06:49.005 [2024-07-15 23:32:37.976903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306023 ] 00:06:49.262 [2024-07-15 23:32:38.033558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.262 [2024-07-15 23:32:38.103525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.262 [2024-07-15 23:32:38.103625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.262 [2024-07-15 23:32:38.103628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 0 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1306159 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1306159 /var/tmp/spdk2.sock 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # local es=0 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # valid_exec_arg waitforlisten 1306159 /var/tmp/spdk2.sock 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@630 -- # local arg=waitforlisten 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # type -t waitforlisten 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # waitforlisten 1306159 /var/tmp/spdk2.sock 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@823 -- # '[' -z 1306159 ']' 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:49.825 23:32:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.083 [2024-07-15 23:32:38.827884] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
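
locking_overlapped_coremask runs the first target on -m 0x7 (cores 0, 1, 2) and then attempts a second on -m 0x1c (cores 2, 3, 4); the single shared core is what triggers the claim error recorded in the next entries. The overlap falls straight out of the masks:

    printf 'overlap mask: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. core 2 is shared
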
00:06:50.083 [2024-07-15 23:32:38.827935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306159 ] 00:06:50.083 [2024-07-15 23:32:38.903962] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1306023 has claimed it. 00:06:50.083 [2024-07-15 23:32:38.904001] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:50.711 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 838: kill: (1306159) - No such process 00:06:50.711 ERROR: process (pid: 1306159) is no longer running 00:06:50.711 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:50.711 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # return 1 00:06:50.711 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@645 -- # es=1 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1306023 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@942 -- # '[' -z 1306023 ']' 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # kill -0 1306023 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # uname 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1306023 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1306023' 00:06:50.712 killing process with pid 1306023 00:06:50.712 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@961 -- # kill 1306023 00:06:50.712 23:32:39 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # wait 1306023 00:06:50.969 00:06:50.969 real 0m1.884s 00:06:50.969 user 0m5.351s 00:06:50.969 sys 0m0.395s 00:06:50.969 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:50.969 23:32:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.969 ************************************ 00:06:50.969 END TEST locking_overlapped_coremask 00:06:50.969 ************************************ 00:06:50.969 23:32:39 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:50.969 23:32:39 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:50.969 23:32:39 event.cpu_locks -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:50.969 23:32:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:50.969 23:32:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.969 ************************************ 00:06:50.969 START TEST locking_overlapped_coremask_via_rpc 00:06:50.969 ************************************ 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1117 -- # locking_overlapped_coremask_via_rpc 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1306306 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1306306 /var/tmp/spdk.sock 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1306306 ']' 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:50.970 23:32:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.970 [2024-07-15 23:32:39.933715] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:50.970 [2024-07-15 23:32:39.933758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306306 ] 00:06:51.227 [2024-07-15 23:32:39.988203] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
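
The check_remaining_locks step in the test that just ended asserts that exactly /var/tmp/spdk_cpu_lock_000 through _002 exist while the -m 0x7 target owns cores 0 to 2, by comparing a glob of the lock files against the expected list. A compact equivalent of that assertion:

    # Expect one lock file per claimed core for a -m 0x7 target (cores 0-2).
    expected="/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002"
    actual=$(echo /var/tmp/spdk_cpu_lock_*)
    [ "$actual" = "$expected" ] && echo "lock files match the 0x7 mask"
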
00:06:51.227 [2024-07-15 23:32:39.988226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.227 [2024-07-15 23:32:40.079995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.227 [2024-07-15 23:32:40.080091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.227 [2024-07-15 23:32:40.080093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1306533 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1306533 /var/tmp/spdk2.sock 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1306533 ']' 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:51.793 23:32:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:51.793 [2024-07-15 23:32:40.769748] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:51.793 [2024-07-15 23:32:40.769805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306533 ] 00:06:52.051 [2024-07-15 23:32:40.844034] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:52.051 [2024-07-15 23:32:40.844063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.051 [2024-07-15 23:32:40.990166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.051 [2024-07-15 23:32:40.993579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.051 [2024-07-15 23:32:40.993579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # local es=0 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:52.617 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.617 [2024-07-15 23:32:41.592605] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1306306 has claimed it. 
00:06:52.875 request: 00:06:52.875 { 00:06:52.875 "method": "framework_enable_cpumask_locks", 00:06:52.875 "req_id": 1 00:06:52.875 } 00:06:52.875 Got JSON-RPC error response 00:06:52.875 response: 00:06:52.875 { 00:06:52.875 "code": -32603, 00:06:52.875 "message": "Failed to claim CPU core: 2" 00:06:52.875 } 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@645 -- # es=1 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1306306 /var/tmp/spdk.sock 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1306306 ']' 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1306533 /var/tmp/spdk2.sock 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@823 -- # '[' -z 1306533 ']' 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
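The -32603 error above is the expected outcome of this test: the second spdk_tgt was started with -m 0x1c (binary 11100, i.e. cores 2-4) and --disable-cpumask-locks, so when framework_enable_cpumask_locks is issued over /var/tmp/spdk2.sock it tries to claim core 2, which the first target (reactors on cores 0-2 above) already holds through its /var/tmp/spdk_cpu_lock_* files. The following is a minimal standalone sketch of that scenario, not the test script itself; it assumes a standard SPDK checkout with scripts/rpc.py available, and the first target's 0x7 mask and --disable-cpumask-locks flag are assumptions, since its launch falls outside this excerpt.

# Hedged sketch: reproduce the overlapping-coremask lock conflict by hand.
# Assumptions: SPDK built in ./build, scripts/rpc.py present, masks as commented.
./build/bin/spdk_tgt -m 0x7  -r /var/tmp/spdk.sock  --disable-cpumask-locks &   # cores 0-2 (mask assumed)
./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &   # cores 2-4, overlaps core 2
sleep 2   # the test proper waits with waitforlisten instead

./scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # claims /var/tmp/spdk_cpu_lock_000..002
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: core 2 already claimed (-32603)

Against a live pair of targets set up this way, the second call returns the same JSON-RPC error captured above.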
00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:52.875 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.134 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:53.134 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # return 0 00:06:53.134 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:53.134 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:53.134 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:53.134 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:53.134 00:06:53.134 real 0m2.096s 00:06:53.134 user 0m0.850s 00:06:53.134 sys 0m0.175s 00:06:53.134 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:53.134 23:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.134 ************************************ 00:06:53.134 END TEST locking_overlapped_coremask_via_rpc 00:06:53.134 ************************************ 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@1136 -- # return 0 00:06:53.134 23:32:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:53.134 23:32:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1306306 ]] 00:06:53.134 23:32:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1306306 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1306306 ']' 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1306306 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@947 -- # uname 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1306306 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1306306' 00:06:53.134 killing process with pid 1306306 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 1306306 00:06:53.134 23:32:42 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 1306306 00:06:53.393 23:32:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1306533 ]] 00:06:53.393 23:32:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1306533 00:06:53.393 23:32:42 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1306533 ']' 00:06:53.393 23:32:42 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1306533 00:06:53.393 23:32:42 event.cpu_locks -- common/autotest_common.sh@947 -- # 
uname 00:06:53.651 23:32:42 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:53.651 23:32:42 event.cpu_locks -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1306533 00:06:53.651 23:32:42 event.cpu_locks -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:06:53.651 23:32:42 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:06:53.651 23:32:42 event.cpu_locks -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1306533' 00:06:53.651 killing process with pid 1306533 00:06:53.651 23:32:42 event.cpu_locks -- common/autotest_common.sh@961 -- # kill 1306533 00:06:53.651 23:32:42 event.cpu_locks -- common/autotest_common.sh@966 -- # wait 1306533 00:06:53.910 23:32:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.910 23:32:42 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.910 23:32:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1306306 ]] 00:06:53.910 23:32:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1306306 00:06:53.910 23:32:42 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1306306 ']' 00:06:53.910 23:32:42 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1306306 00:06:53.910 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1306306) - No such process 00:06:53.910 23:32:42 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 1306306 is not found' 00:06:53.910 Process with pid 1306306 is not found 00:06:53.910 23:32:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1306533 ]] 00:06:53.910 23:32:42 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1306533 00:06:53.910 23:32:42 event.cpu_locks -- common/autotest_common.sh@942 -- # '[' -z 1306533 ']' 00:06:53.910 23:32:42 event.cpu_locks -- common/autotest_common.sh@946 -- # kill -0 1306533 00:06:53.910 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 946: kill: (1306533) - No such process 00:06:53.910 23:32:42 event.cpu_locks -- common/autotest_common.sh@969 -- # echo 'Process with pid 1306533 is not found' 00:06:53.910 Process with pid 1306533 is not found 00:06:53.910 23:32:42 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.910 00:06:53.910 real 0m16.376s 00:06:53.910 user 0m28.660s 00:06:53.910 sys 0m4.564s 00:06:53.910 23:32:42 event.cpu_locks -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:53.910 23:32:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.910 ************************************ 00:06:53.910 END TEST cpu_locks 00:06:53.910 ************************************ 00:06:53.910 23:32:42 event -- common/autotest_common.sh@1136 -- # return 0 00:06:53.910 00:06:53.910 real 0m41.344s 00:06:53.910 user 1m19.338s 00:06:53.910 sys 0m7.839s 00:06:53.910 23:32:42 event -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:53.910 23:32:42 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.910 ************************************ 00:06:53.910 END TEST event 00:06:53.910 ************************************ 00:06:53.910 23:32:42 -- common/autotest_common.sh@1136 -- # return 0 00:06:53.911 23:32:42 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:53.911 23:32:42 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:53.911 23:32:42 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:53.911 23:32:42 -- 
common/autotest_common.sh@10 -- # set +x 00:06:53.911 ************************************ 00:06:53.911 START TEST thread 00:06:53.911 ************************************ 00:06:53.911 23:32:42 thread -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:54.169 * Looking for test storage... 00:06:54.169 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:54.169 23:32:42 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.169 23:32:42 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:06:54.169 23:32:42 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:54.169 23:32:42 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.169 ************************************ 00:06:54.169 START TEST thread_poller_perf 00:06:54.169 ************************************ 00:06:54.169 23:32:42 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:54.169 [2024-07-15 23:32:42.977745] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:54.169 [2024-07-15 23:32:42.977814] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307038 ] 00:06:54.169 [2024-07-15 23:32:43.037585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.169 [2024-07-15 23:32:43.109523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.169 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:55.554 ====================================== 00:06:55.554 busy:2106480240 (cyc) 00:06:55.554 total_run_count: 426000 00:06:55.554 tsc_hz: 2100000000 (cyc) 00:06:55.554 ====================================== 00:06:55.554 poller_cost: 4944 (cyc), 2354 (nsec) 00:06:55.554 00:06:55.554 real 0m1.227s 00:06:55.554 user 0m1.145s 00:06:55.554 sys 0m0.078s 00:06:55.554 23:32:44 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:55.554 23:32:44 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.554 ************************************ 00:06:55.554 END TEST thread_poller_perf 00:06:55.554 ************************************ 00:06:55.554 23:32:44 thread -- common/autotest_common.sh@1136 -- # return 0 00:06:55.554 23:32:44 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.554 23:32:44 thread -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:06:55.554 23:32:44 thread -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:55.554 23:32:44 thread -- common/autotest_common.sh@10 -- # set +x 00:06:55.554 ************************************ 00:06:55.554 START TEST thread_poller_perf 00:06:55.554 ************************************ 00:06:55.554 23:32:44 thread.thread_poller_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:55.554 [2024-07-15 23:32:44.272223] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
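The poller_perf summary above is straightforward arithmetic over the printed counters: cycles per poller invocation is the busy cycle count divided by total_run_count, and the nanosecond figure is that result divided by tsc_hz. A small sketch, using only the numbers reported above, reproduces the 4944 (cyc) / 2354 (nsec) line:

# Recompute poller_cost from the counters in the report above
# (integer arithmetic, matching the tool's rounding).
busy=2106480240      # busy: (cyc)
count=426000         # total_run_count
tsc_hz=2100000000    # tsc_hz: (cyc)

cyc=$(( busy / count ))                  # 4944 cycles per poller call
nsec=$(( cyc * 1000000000 / tsc_hz ))    # 2354 nsec at 2.1 GHz
echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

Applied to the 0-microsecond-period run reported further down (busy 2101375072 over 5628000 calls), the same division gives that run's 373 (cyc) / 177 (nsec) line.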
00:06:55.554 [2024-07-15 23:32:44.272292] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307235 ] 00:06:55.554 [2024-07-15 23:32:44.332484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.554 [2024-07-15 23:32:44.404652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.554 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:56.930 ====================================== 00:06:56.930 busy:2101375072 (cyc) 00:06:56.930 total_run_count: 5628000 00:06:56.930 tsc_hz: 2100000000 (cyc) 00:06:56.930 ====================================== 00:06:56.930 poller_cost: 373 (cyc), 177 (nsec) 00:06:56.930 00:06:56.930 real 0m1.222s 00:06:56.930 user 0m1.149s 00:06:56.930 sys 0m0.070s 00:06:56.930 23:32:45 thread.thread_poller_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:56.930 23:32:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 ************************************ 00:06:56.930 END TEST thread_poller_perf 00:06:56.930 ************************************ 00:06:56.930 23:32:45 thread -- common/autotest_common.sh@1136 -- # return 0 00:06:56.930 23:32:45 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.930 00:06:56.930 real 0m2.672s 00:06:56.930 user 0m2.385s 00:06:56.930 sys 0m0.298s 00:06:56.930 23:32:45 thread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:56.930 23:32:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 ************************************ 00:06:56.930 END TEST thread 00:06:56.930 ************************************ 00:06:56.930 23:32:45 -- common/autotest_common.sh@1136 -- # return 0 00:06:56.930 23:32:45 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:56.930 23:32:45 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:06:56.930 23:32:45 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:56.930 23:32:45 -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 ************************************ 00:06:56.930 START TEST accel 00:06:56.930 ************************************ 00:06:56.930 23:32:45 accel -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:06:56.930 * Looking for test storage... 
00:06:56.930 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:06:56.930 23:32:45 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:56.930 23:32:45 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:56.930 23:32:45 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.930 23:32:45 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1307568 00:06:56.930 23:32:45 accel -- accel/accel.sh@63 -- # waitforlisten 1307568 00:06:56.930 23:32:45 accel -- common/autotest_common.sh@823 -- # '[' -z 1307568 ']' 00:06:56.930 23:32:45 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:56.930 23:32:45 accel -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.930 23:32:45 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:56.930 23:32:45 accel -- common/autotest_common.sh@828 -- # local max_retries=100 00:06:56.930 23:32:45 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.930 23:32:45 accel -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.930 23:32:45 accel -- common/autotest_common.sh@832 -- # xtrace_disable 00:06:56.930 23:32:45 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.930 23:32:45 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.930 23:32:45 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.930 23:32:45 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.930 23:32:45 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.930 23:32:45 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:56.930 23:32:45 accel -- accel/accel.sh@41 -- # jq -r . 00:06:56.930 [2024-07-15 23:32:45.714631] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:56.930 [2024-07-15 23:32:45.714681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307568 ] 00:06:56.930 [2024-07-15 23:32:45.771690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.930 [2024-07-15 23:32:45.851108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.862 23:32:46 accel -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:06:57.862 23:32:46 accel -- common/autotest_common.sh@856 -- # return 0 00:06:57.862 23:32:46 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:57.862 23:32:46 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:57.862 23:32:46 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:57.862 23:32:46 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:57.862 23:32:46 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:57.862 23:32:46 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:57.862 23:32:46 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:57.862 23:32:46 accel -- common/autotest_common.sh@553 -- # xtrace_disable 00:06:57.862 23:32:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.862 23:32:46 accel -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:06:57.862 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.862 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.862 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.862 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.862 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.862 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.862 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.862 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.862 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.862 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.862 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.862 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 
23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # IFS== 00:06:57.863 23:32:46 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:57.863 23:32:46 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:57.863 23:32:46 accel -- accel/accel.sh@75 -- # killprocess 1307568 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@942 -- # '[' -z 1307568 ']' 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@946 -- # kill -0 1307568 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@947 -- # uname 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1307568 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1307568' 00:06:57.863 killing process with pid 1307568 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@961 -- # kill 1307568 00:06:57.863 23:32:46 accel -- common/autotest_common.sh@966 -- # wait 1307568 00:06:58.121 23:32:46 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:58.121 23:32:46 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:58.121 23:32:46 accel -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:06:58.121 23:32:46 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:58.121 23:32:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.121 23:32:46 accel.accel_help -- common/autotest_common.sh@1117 -- # accel_perf -h 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:58.121 23:32:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
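The get_expected_opcs bookkeeping a few lines above builds the expected_opcs map by asking the target which module services each accel opcode and flattening the JSON with jq; with no accel modules configured (all the -gt 0 checks evaluate false), every opcode resolves to the software implementation. A hedged standalone equivalent, assuming a target listening on /var/tmp/spdk.sock (the test goes through its rpc_cmd wrapper instead), would be:

# Ask the running target how each accel opcode is serviced and print key=value
# pairs, using the same jq filter as the trace above. Sample output is
# illustrative; on this run every opcode maps to the software module.
./scripts/rpc.py -s /var/tmp/spdk.sock accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# copy=software
# fill=software
# crc32c=software
# ...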
00:06:58.121 23:32:46 accel.accel_help -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:58.121 23:32:46 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:58.121 23:32:46 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:58.121 23:32:46 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:58.121 23:32:46 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:58.121 23:32:46 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:58.121 23:32:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.121 ************************************ 00:06:58.121 START TEST accel_missing_filename 00:06:58.121 ************************************ 00:06:58.121 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress 00:06:58.121 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@642 -- # local es=0 00:06:58.121 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:58.121 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:58.121 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.121 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:58.121 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.121 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress 00:06:58.121 23:32:47 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:58.121 23:32:47 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:58.121 23:32:47 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.121 23:32:47 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.121 23:32:47 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.121 23:32:47 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.122 23:32:47 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.122 23:32:47 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:58.122 23:32:47 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:58.122 [2024-07-15 23:32:47.048844] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:58.122 [2024-07-15 23:32:47.048909] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307824 ] 00:06:58.380 [2024-07-15 23:32:47.108961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.380 [2024-07-15 23:32:47.184766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.380 [2024-07-15 23:32:47.225465] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.380 [2024-07-15 23:32:47.285110] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:58.380 A filename is required. 
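accel_missing_filename above is the first of several negative tests in this run: the compress workload is launched without the -l input file it requires, accel_perf aborts with "A filename is required.", and the surrounding NOT/es bookkeeping passes the test only because the command failed. A simplified stand-in for that pattern, not the real autotest_common.sh helper (which additionally remaps exit codes above 128, as the es=234 / es=106 lines that follow show), looks like:

# Simplified sketch of the negative-test wrapper used throughout this section:
# run a command that is expected to fail, and succeed only if it really failed.
NOT() {
    local es=0
    "$@" || es=$?          # capture the wrapped command's exit status
    (( es != 0 ))          # success for the test means a non-zero status
}

# Mirrors the invocation above: compress without -l must be rejected.
NOT ./build/examples/accel_perf -t 1 -w compress && echo "accel_missing_filename: OK"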
00:06:58.380 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@645 -- # es=234 00:06:58.380 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:58.380 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@654 -- # es=106 00:06:58.380 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@655 -- # case "$es" in 00:06:58.380 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # es=1 00:06:58.380 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:58.380 00:06:58.380 real 0m0.332s 00:06:58.380 user 0m0.249s 00:06:58.380 sys 0m0.124s 00:06:58.380 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:58.380 23:32:47 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:58.380 ************************************ 00:06:58.380 END TEST accel_missing_filename 00:06:58.380 ************************************ 00:06:58.643 23:32:47 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:58.643 23:32:47 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:58.643 23:32:47 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:06:58.643 23:32:47 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:58.643 23:32:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.643 ************************************ 00:06:58.643 START TEST accel_compress_verify 00:06:58.643 ************************************ 00:06:58.643 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:58.643 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@642 -- # local es=0 00:06:58.643 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:58.643 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:58.643 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.643 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:58.643 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.643 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:58.643 23:32:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:06:58.643 23:32:47 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:58.643 23:32:47 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.643 23:32:47 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.643 23:32:47 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.643 23:32:47 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.643 23:32:47 accel.accel_compress_verify -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.643 23:32:47 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:58.643 23:32:47 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:58.643 [2024-07-15 23:32:47.443396] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:58.643 [2024-07-15 23:32:47.443462] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307919 ] 00:06:58.643 [2024-07-15 23:32:47.500674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.643 [2024-07-15 23:32:47.572668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.643 [2024-07-15 23:32:47.613306] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.902 [2024-07-15 23:32:47.673644] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:58.902 00:06:58.902 Compression does not support the verify option, aborting. 00:06:58.902 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@645 -- # es=161 00:06:58.902 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:58.902 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@654 -- # es=33 00:06:58.902 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@655 -- # case "$es" in 00:06:58.902 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # es=1 00:06:58.902 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:58.902 00:06:58.902 real 0m0.328s 00:06:58.902 user 0m0.259s 00:06:58.902 sys 0m0.109s 00:06:58.902 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:58.902 23:32:47 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:58.902 ************************************ 00:06:58.902 END TEST accel_compress_verify 00:06:58.902 ************************************ 00:06:58.902 23:32:47 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:58.902 23:32:47 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:58.902 23:32:47 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:06:58.902 23:32:47 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:58.902 23:32:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.902 ************************************ 00:06:58.902 START TEST accel_wrong_workload 00:06:58.902 ************************************ 00:06:58.902 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w foobar 00:06:58.902 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@642 -- # local es=0 00:06:58.902 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:58.902 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:58.902 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.902 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:58.902 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:58.902 23:32:47 
accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w foobar 00:06:58.902 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:58.902 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:58.903 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.903 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.903 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.903 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.903 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.903 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:58.903 23:32:47 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:58.903 Unsupported workload type: foobar 00:06:58.903 [2024-07-15 23:32:47.822195] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:58.903 accel_perf options: 00:06:58.903 [-h help message] 00:06:58.903 [-q queue depth per core] 00:06:58.903 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.903 [-T number of threads per core 00:06:58.903 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.903 [-t time in seconds] 00:06:58.903 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.903 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:58.903 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.903 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.903 [-S for crc32c workload, use this seed value (default 0) 00:06:58.903 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.903 [-f for fill workload, use this BYTE value (default 255) 00:06:58.903 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.903 [-y verify result if this switch is on] 00:06:58.903 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.903 Can be used to spread operations across a wider range of memory. 
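The usage text above (printed because foobar is not a valid -w workload) lists every option accel_perf accepts. For reference, a few invocations that stay within those options, built only from flags that appear either in this help text or elsewhere in this log; treat them as illustrative sketches run from an SPDK checkout, not part of the test itself:

# crc32c for 1 second on 4 KiB buffers with seed 32 and result verification,
# as the accel_crc32c test further down runs it:
./build/examples/accel_perf -t 1 -w crc32c -S 32 -y

# compress with the required -l input file (and without -y, which the
# compress workload rejects, as accel_compress_verify shows):
./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib

# xor with the minimum of two source buffers (-x must be non-negative;
# the help text gives 2 as the default and minimum):
./build/examples/accel_perf -t 1 -w xor -y -x 2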
00:06:58.903 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@645 -- # es=1 00:06:58.903 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:58.903 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:58.903 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:58.903 00:06:58.903 real 0m0.023s 00:06:58.903 user 0m0.019s 00:06:58.903 sys 0m0.004s 00:06:58.903 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:58.903 23:32:47 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:58.903 ************************************ 00:06:58.903 END TEST accel_wrong_workload 00:06:58.903 ************************************ 00:06:58.903 Error: writing output failed: Broken pipe 00:06:58.903 23:32:47 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:58.903 23:32:47 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.903 23:32:47 accel -- common/autotest_common.sh@1093 -- # '[' 10 -le 1 ']' 00:06:58.903 23:32:47 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:58.903 23:32:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.161 ************************************ 00:06:59.161 START TEST accel_negative_buffers 00:06:59.161 ************************************ 00:06:59.161 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@1117 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:59.161 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@642 -- # local es=0 00:06:59.161 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@644 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@630 -- # local arg=accel_perf 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # type -t accel_perf 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # accel_perf -t 1 -w xor -y -x -1 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:59.162 23:32:47 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:59.162 -x option must be non-negative. 
00:06:59.162 [2024-07-15 23:32:47.918733] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:59.162 accel_perf options: 00:06:59.162 [-h help message] 00:06:59.162 [-q queue depth per core] 00:06:59.162 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:59.162 [-T number of threads per core 00:06:59.162 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:59.162 [-t time in seconds] 00:06:59.162 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:59.162 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:59.162 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:59.162 [-l for compress/decompress workloads, name of uncompressed input file 00:06:59.162 [-S for crc32c workload, use this seed value (default 0) 00:06:59.162 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:59.162 [-f for fill workload, use this BYTE value (default 255) 00:06:59.162 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:59.162 [-y verify result if this switch is on] 00:06:59.162 [-a tasks to allocate per core (default: same value as -q)] 00:06:59.162 Can be used to spread operations across a wider range of memory. 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@645 -- # es=1 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:06:59.162 00:06:59.162 real 0m0.035s 00:06:59.162 user 0m0.023s 00:06:59.162 sys 0m0.012s 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@1118 -- # xtrace_disable 00:06:59.162 23:32:47 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:59.162 ************************************ 00:06:59.162 END TEST accel_negative_buffers 00:06:59.162 ************************************ 00:06:59.162 Error: writing output failed: Broken pipe 00:06:59.162 23:32:47 accel -- common/autotest_common.sh@1136 -- # return 0 00:06:59.162 23:32:47 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:59.162 23:32:47 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:06:59.162 23:32:47 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:06:59.162 23:32:47 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.162 ************************************ 00:06:59.162 START TEST accel_crc32c 00:06:59.162 ************************************ 00:06:59.162 23:32:47 accel.accel_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:59.162 23:32:47 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:59.162 [2024-07-15 23:32:48.010414] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:06:59.162 [2024-07-15 23:32:48.010472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1307987 ] 00:06:59.162 [2024-07-15 23:32:48.068420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.162 [2024-07-15 23:32:48.142826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:59.422 23:32:48 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.357 23:32:49 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:00.357 23:32:49 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.357 00:07:00.357 real 0m1.331s 00:07:00.357 user 0m1.219s 00:07:00.357 sys 0m0.117s 00:07:00.357 23:32:49 accel.accel_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:00.357 23:32:49 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:00.357 ************************************ 00:07:00.357 END TEST accel_crc32c 00:07:00.357 ************************************ 00:07:00.616 23:32:49 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:00.616 23:32:49 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:00.616 23:32:49 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:07:00.616 23:32:49 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:00.616 23:32:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.616 ************************************ 00:07:00.616 START TEST accel_crc32c_C2 00:07:00.616 ************************************ 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:00.616 [2024-07-15 23:32:49.408500] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:00.616 [2024-07-15 23:32:49.408576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308232 ] 00:07:00.616 [2024-07-15 23:32:49.467487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.616 [2024-07-15 23:32:49.545108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:00.616 23:32:49 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.616 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:00.617 23:32:49 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # read -r var val 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.994 00:07:01.994 real 0m1.341s 00:07:01.994 user 0m1.225s 00:07:01.994 sys 0m0.122s 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:01.994 23:32:50 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:01.994 ************************************ 00:07:01.994 END TEST accel_crc32c_C2 00:07:01.994 ************************************ 00:07:01.994 23:32:50 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:01.994 23:32:50 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:01.994 23:32:50 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:07:01.994 23:32:50 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:01.994 23:32:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.994 ************************************ 00:07:01.994 START TEST accel_copy 00:07:01.994 ************************************ 00:07:01.994 23:32:50 accel.accel_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy -y 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:01.994 
23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:01.994 23:32:50 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:01.994 [2024-07-15 23:32:50.807871] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:01.994 [2024-07-15 23:32:50.807936] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308490 ] 00:07:01.994 [2024-07-15 23:32:50.863439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.994 [2024-07-15 23:32:50.934860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # 
val='4096 bytes' 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:02.253 23:32:50 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.185 23:32:52 
accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:03.185 23:32:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.186 23:32:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:03.186 23:32:52 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.186 00:07:03.186 real 0m1.326s 00:07:03.186 user 0m1.219s 00:07:03.186 sys 0m0.112s 00:07:03.186 23:32:52 accel.accel_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:03.186 23:32:52 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:03.186 ************************************ 00:07:03.186 END TEST accel_copy 00:07:03.186 ************************************ 00:07:03.186 23:32:52 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:03.186 23:32:52 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.186 23:32:52 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:07:03.186 23:32:52 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:03.186 23:32:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:03.444 ************************************ 00:07:03.444 START TEST accel_fill 00:07:03.444 ************************************ 00:07:03.444 23:32:52 accel.accel_fill -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 
]] 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:03.444 [2024-07-15 23:32:52.193128] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:03.444 [2024-07-15 23:32:52.193197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308739 ] 00:07:03.444 [2024-07-15 23:32:52.249508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.444 [2024-07-15 23:32:52.321100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 
23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:03.444 23:32:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@21 
-- # case "$var" in 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:04.817 23:32:53 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.817 00:07:04.817 real 0m1.329s 00:07:04.817 user 0m1.223s 00:07:04.817 sys 0m0.111s 00:07:04.817 23:32:53 accel.accel_fill -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:04.817 23:32:53 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:04.817 ************************************ 00:07:04.817 END TEST accel_fill 00:07:04.817 ************************************ 00:07:04.817 23:32:53 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:04.818 23:32:53 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:04.818 23:32:53 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:07:04.818 23:32:53 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:04.818 23:32:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.818 ************************************ 00:07:04.818 START TEST accel_copy_crc32c 00:07:04.818 ************************************ 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- 
accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:04.818 [2024-07-15 23:32:53.580666] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:04.818 [2024-07-15 23:32:53.580716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1308986 ] 00:07:04.818 [2024-07-15 23:32:53.635594] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.818 [2024-07-15 23:32:53.707313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:04.818 23:32:53 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.193 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 
00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.194 00:07:06.194 real 0m1.328s 00:07:06.194 user 0m1.218s 00:07:06.194 sys 0m0.115s 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:06.194 23:32:54 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:06.194 ************************************ 00:07:06.194 END TEST accel_copy_crc32c 00:07:06.194 ************************************ 00:07:06.194 23:32:54 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:06.194 23:32:54 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.194 23:32:54 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:07:06.194 23:32:54 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:06.194 23:32:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:06.194 ************************************ 00:07:06.194 START TEST accel_copy_crc32c_C2 00:07:06.194 ************************************ 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:06.194 23:32:54 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:06.194 23:32:54 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:06.194 [2024-07-15 23:32:54.965942] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:06.194 [2024-07-15 23:32:54.965990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309237 ] 00:07:06.194 [2024-07-15 23:32:55.020758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.194 [2024-07-15 23:32:55.092345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:06.194 23:32:55 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.570 00:07:07.570 real 0m1.327s 00:07:07.570 user 0m1.219s 
00:07:07.570 sys 0m0.114s 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:07.570 23:32:56 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:07.570 ************************************ 00:07:07.570 END TEST accel_copy_crc32c_C2 00:07:07.570 ************************************ 00:07:07.570 23:32:56 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:07.570 23:32:56 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:07.570 23:32:56 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:07:07.570 23:32:56 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:07.570 23:32:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:07.570 ************************************ 00:07:07.570 START TEST accel_dualcast 00:07:07.570 ************************************ 00:07:07.570 23:32:56 accel.accel_dualcast -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dualcast -y 00:07:07.570 23:32:56 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:07.570 23:32:56 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:07.570 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.570 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.570 23:32:56 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:07.570 23:32:56 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:07.570 23:32:56 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:07.570 23:32:56 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:07.571 [2024-07-15 23:32:56.350954] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
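Note on the repeated trace lines: the many `-- # IFS=:` / `-- # read -r var val` / `-- # case "$var" in` records in this section all come from one small parsing loop in accel.sh. accel_perf prints its configuration as `key: value` lines, the script splits each line on `:` (hence `IFS=:`) and records the operation (`accel_opc=crc32c`) and module (`accel_module=software`), so that it can assert afterwards that a software module ran the expected opcode (`[[ -n software ]]`, `[[ -n crc32c ]]`, `[[ software == software ]]`). A minimal Bash sketch of that pattern, reconstructed from the trace alone — the function name and the case patterns are assumptions, not the actual accel.sh source:
# Sketch reconstructed from the xtrace above -- not copied from spdk/test/accel/accel.sh.
# parse_accel_perf_config and the case patterns are hypothetical; accel_opc/accel_module
# and the final assertions match what the trace shows.
parse_accel_perf_config() {
    local var val accel_opc='' accel_module=''
    while IFS=: read -r var val; do          # split each "key: value" line on ':'
        case "$var" in
            *[Ww]orkload*) accel_opc=${val# } ;;    # assumed key; trace only shows accel_opc=crc32c
            *[Mm]odule*)   accel_module=${val# } ;; # assumed key; trace only shows accel_module=software
        esac
    done
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]
}
# Usage sketch (flags exactly as logged; feeding the loop via a pipe is an assumption):
# /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 | parse_accel_perf_config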
00:07:07.571 [2024-07-15 23:32:56.351018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309484 ] 00:07:07.571 [2024-07-15 23:32:56.406983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.571 [2024-07-15 23:32:56.478383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:07.571 23:32:56 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var 
val 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:08.945 23:32:57 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.945 00:07:08.945 real 0m1.328s 00:07:08.945 user 0m1.225s 00:07:08.945 sys 0m0.108s 00:07:08.945 23:32:57 accel.accel_dualcast -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:08.945 23:32:57 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:08.945 ************************************ 00:07:08.945 END TEST accel_dualcast 00:07:08.945 ************************************ 00:07:08.945 23:32:57 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:08.945 23:32:57 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:08.945 23:32:57 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:07:08.945 23:32:57 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:08.945 23:32:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.945 ************************************ 00:07:08.945 START TEST accel_compare 00:07:08.945 ************************************ 00:07:08.945 23:32:57 accel.accel_compare -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compare -y 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:08.945 [2024-07-15 23:32:57.736419] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
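Every case in this section follows the same wrapper pattern visible in the xtrace: run_test prints the START/END TEST banners and the timing summary, while accel_test builds the accel JSON config (accel_json_cfg stays empty here) and points accel_perf at it via /dev/fd/62. A rough, hypothetical stand-in for those helpers, only to make the flow explicit; the real functions live in the SPDK test scripts and do more bookkeeping:

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  # hypothetical stand-in for autotest_common.sh's run_test (banners + timing only)
  run_test() {
      local name=$1; shift
      echo "START TEST $name"; time "$@"; echo "END TEST $name"
  }
  # hypothetical stand-in for accel_test; the harness additionally supplies -c /dev/fd/62
  accel_test() {
      "$SPDK/build/examples/accel_perf" "$@"
  }
  run_test accel_compare accel_test -t 1 -w compare -y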
00:07:08.945 [2024-07-15 23:32:57.736485] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309732 ] 00:07:08.945 [2024-07-15 23:32:57.791437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.945 [2024-07-15 23:32:57.863593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.945 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.946 23:32:57 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.318 23:32:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.318 23:32:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.318 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.318 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.318 23:32:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.318 23:32:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.318 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.318 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 
00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:10.319 23:32:59 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.319 00:07:10.319 real 0m1.328s 00:07:10.319 user 0m1.221s 00:07:10.319 sys 0m0.111s 00:07:10.319 23:32:59 accel.accel_compare -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:10.319 23:32:59 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:10.319 ************************************ 00:07:10.319 END TEST accel_compare 00:07:10.319 ************************************ 00:07:10.319 23:32:59 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:10.319 23:32:59 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:10.319 23:32:59 accel -- common/autotest_common.sh@1093 -- # '[' 7 -le 1 ']' 00:07:10.319 23:32:59 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:10.319 23:32:59 accel -- common/autotest_common.sh@10 -- # set +x 00:07:10.319 ************************************ 00:07:10.319 START TEST accel_xor 00:07:10.319 ************************************ 00:07:10.319 23:32:59 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:10.319 [2024-07-15 23:32:59.119741] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
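The real ≈ 1.33 s / user ≈ 1.22 s figures reported for each case line up with the flags: -t 1 (the '1 seconds' value in the trace) asks accel_perf to run the workload for one second, the SPDK reactor busy-polls core 0 for that window so CPU time tracks wall time, and the remaining ~0.3 s is mostly app start-up and teardown. The same breakdown can be reproduced directly (sketch, same path assumption as above):

  time "$SPDK/build/examples/accel_perf" -t 1 -w compare -y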
00:07:10.319 [2024-07-15 23:32:59.119803] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309983 ] 00:07:10.319 [2024-07-15 23:32:59.175722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.319 [2024-07-15 23:32:59.247650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:10.319 23:32:59 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:10.319 23:32:59 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.691 00:07:11.691 real 0m1.330s 00:07:11.691 user 0m1.221s 00:07:11.691 sys 0m0.113s 00:07:11.691 23:33:00 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:11.691 23:33:00 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:11.691 ************************************ 00:07:11.691 END TEST accel_xor 00:07:11.691 ************************************ 00:07:11.691 23:33:00 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:11.691 23:33:00 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:11.691 23:33:00 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:07:11.691 23:33:00 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:11.691 23:33:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.691 ************************************ 00:07:11.691 START TEST accel_xor 00:07:11.691 ************************************ 00:07:11.691 23:33:00 accel.accel_xor -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w xor -y -x 3 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:11.691 23:33:00 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:11.691 [2024-07-15 23:33:00.516008] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
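The two xor cases differ only in the source count: the first run uses the default of two source buffers (val=2 in its trace), while the second adds -x 3 and its trace records val=3. Side by side, with the harness plumbing stripped (sketch):

  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y        # default: 2 xor source buffers
  "$SPDK/build/examples/accel_perf" -t 1 -w xor -y -x 3   # -x raises the source count to 3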
00:07:11.691 [2024-07-15 23:33:00.516074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310268 ] 00:07:11.691 [2024-07-15 23:33:00.573531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.691 [2024-07-15 23:33:00.650158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:11.950 23:33:00 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.950 23:33:00 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:12.882 23:33:01 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.882 00:07:12.882 real 0m1.342s 00:07:12.882 user 0m1.237s 00:07:12.882 sys 0m0.117s 00:07:12.882 23:33:01 accel.accel_xor -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:12.882 23:33:01 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:12.882 ************************************ 00:07:12.882 END TEST accel_xor 00:07:12.882 ************************************ 00:07:12.882 23:33:01 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:13.140 23:33:01 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:13.140 23:33:01 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:07:13.140 23:33:01 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:13.140 23:33:01 accel -- common/autotest_common.sh@10 -- # set +x 00:07:13.140 ************************************ 00:07:13.140 START TEST accel_dif_verify 00:07:13.140 ************************************ 00:07:13.140 23:33:01 accel.accel_dif_verify -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_verify 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:13.140 23:33:01 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:13.140 [2024-07-15 23:33:01.923236] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
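The DIF cases starting here are launched without the -y flag used by the copy/compare/xor cases, and their traces record additional 512-byte and 8-byte parameters alongside the 4096-byte buffer size; those values are tracked by the test script's parameter loop rather than passed on the accel_perf command line, which in the trace is just -t 1 -w dif_verify plus the -c config. Standalone sketch of that command:

  "$SPDK/build/examples/accel_perf" -t 1 -w dif_verify    # as driven by run_test accel_dif_verify above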
00:07:13.140 [2024-07-15 23:33:01.923283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310599 ] 00:07:13.140 [2024-07-15 23:33:01.979074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.140 [2024-07-15 23:33:02.051985] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- 
accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.140 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.141 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.141 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.141 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:13.141 23:33:02 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:13.141 23:33:02 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:13.141 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:13.141 23:33:02 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.511 23:33:03 accel.accel_dif_verify -- 
accel/accel.sh@21 -- # case "$var" in 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:14.511 23:33:03 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.511 00:07:14.511 real 0m1.333s 00:07:14.511 user 0m1.234s 00:07:14.511 sys 0m0.113s 00:07:14.511 23:33:03 accel.accel_dif_verify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:14.511 23:33:03 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:14.511 ************************************ 00:07:14.511 END TEST accel_dif_verify 00:07:14.511 ************************************ 00:07:14.511 23:33:03 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:14.511 23:33:03 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:14.511 23:33:03 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:07:14.511 23:33:03 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:14.511 23:33:03 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.511 ************************************ 00:07:14.511 START TEST accel_dif_generate 00:07:14.511 ************************************ 00:07:14.511 23:33:03 accel.accel_dif_generate -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 
-- # read -r var val 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:14.511 23:33:03 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:14.511 [2024-07-15 23:33:03.318064] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:14.512 [2024-07-15 23:33:03.318130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1310860 ] 00:07:14.512 [2024-07-15 23:33:03.373099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.512 [2024-07-15 23:33:03.445258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.512 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 
accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.769 23:33:03 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.703 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:15.704 23:33:04 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.704 00:07:15.704 real 0m1.330s 00:07:15.704 user 0m1.227s 00:07:15.704 sys 0m0.118s 00:07:15.704 
23:33:04 accel.accel_dif_generate -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:15.704 23:33:04 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:15.704 ************************************ 00:07:15.704 END TEST accel_dif_generate 00:07:15.704 ************************************ 00:07:15.704 23:33:04 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:15.704 23:33:04 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:15.704 23:33:04 accel -- common/autotest_common.sh@1093 -- # '[' 6 -le 1 ']' 00:07:15.704 23:33:04 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:15.704 23:33:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.963 ************************************ 00:07:15.963 START TEST accel_dif_generate_copy 00:07:15.963 ************************************ 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w dif_generate_copy 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:15.963 [2024-07-15 23:33:04.714017] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:07:15.963 [2024-07-15 23:33:04.714073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311106 ] 00:07:15.963 [2024-07-15 23:33:04.771212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.963 [2024-07-15 23:33:04.843209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.963 23:33:04 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.338 00:07:17.338 real 0m1.334s 00:07:17.338 user 0m1.225s 00:07:17.338 sys 0m0.120s 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:17.338 23:33:06 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:17.338 ************************************ 00:07:17.338 END TEST accel_dif_generate_copy 00:07:17.338 ************************************ 00:07:17.338 23:33:06 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:17.338 23:33:06 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:17.338 23:33:06 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.338 23:33:06 accel -- common/autotest_common.sh@1093 -- # '[' 8 -le 1 ']' 00:07:17.338 23:33:06 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:17.338 23:33:06 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.338 ************************************ 00:07:17.338 START TEST accel_comp 00:07:17.338 ************************************ 00:07:17.338 23:33:06 accel.accel_comp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:07:17.338 23:33:06 
accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:17.338 [2024-07-15 23:33:06.112777] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:17.338 [2024-07-15 23:33:06.112833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311359 ] 00:07:17.338 [2024-07-15 23:33:06.172359] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.338 [2024-07-15 23:33:06.246281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.338 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.339 23:33:06 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:17.339 23:33:06 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:18.712 23:33:07 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.712 00:07:18.712 real 0m1.340s 00:07:18.712 user 0m1.240s 00:07:18.712 sys 0m0.112s 00:07:18.712 23:33:07 accel.accel_comp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:18.712 23:33:07 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:18.712 ************************************ 00:07:18.712 END TEST accel_comp 00:07:18.712 ************************************ 00:07:18.712 23:33:07 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:18.712 23:33:07 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:18.712 23:33:07 accel -- common/autotest_common.sh@1093 -- # '[' 9 -le 1 ']' 00:07:18.712 23:33:07 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:18.712 23:33:07 accel -- common/autotest_common.sh@10 -- # set +x 00:07:18.712 ************************************ 00:07:18.712 START TEST accel_decomp 
00:07:18.712 ************************************ 00:07:18.712 23:33:07 accel.accel_decomp -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:18.712 23:33:07 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:18.712 [2024-07-15 23:33:07.522642] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:18.712 [2024-07-15 23:33:07.522714] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1311608 ] 00:07:18.712 [2024-07-15 23:33:07.579502] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.712 [2024-07-15 23:33:07.652034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 
23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:18.971 23:33:07 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:19.903 23:33:08 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.903 00:07:19.903 real 0m1.339s 00:07:19.903 user 0m1.225s 00:07:19.903 sys 0m0.126s 00:07:19.903 23:33:08 accel.accel_decomp -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:19.903 23:33:08 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:19.903 ************************************ 00:07:19.903 END TEST accel_decomp 00:07:19.903 ************************************ 
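For reference, a minimal standalone sketch of the decompress case that TEST accel_decomp exercised above, assuming the SPDK checkout paths printed in this log. The harness feeds accel_perf a generated JSON config over /dev/fd/62; omitting that config and relying on the default software accel module is an assumption here, made because the log itself reports "accel_module=software". All other flags are copied from the logged command line.

#!/usr/bin/env bash
# Hedged reproduction sketch -- not the exact harness invocation.
# SPDK_ROOT matches the workspace path printed throughout this log; adjust for your checkout.
SPDK_ROOT=/var/jenkins/workspace/nvmf-phy-autotest/spdk

# -t 1           : run the workload for 1 second, as in the logged run
# -w decompress  : workload type under test
# -l .../bib     : input file the suite points accel_perf at
# -y             : verification flag, copied from the logged command line
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_ROOT/test/accel/bib" -y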
00:07:19.903 23:33:08 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:19.903 23:33:08 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:19.903 23:33:08 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:07:19.903 23:33:08 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:19.903 23:33:08 accel -- common/autotest_common.sh@10 -- # set +x 00:07:20.161 ************************************ 00:07:20.161 START TEST accel_decomp_full 00:07:20.161 ************************************ 00:07:20.161 23:33:08 accel.accel_decomp_full -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:20.161 23:33:08 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:20.161 [2024-07-15 23:33:08.928224] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:07:20.161 [2024-07-15 23:33:08.928273] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312071 ] 00:07:20.161 [2024-07-15 23:33:08.984712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.161 [2024-07-15 23:33:09.057048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 
accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:20.161 23:33:09 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 
00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.536 23:33:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:21.537 23:33:10 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.537 00:07:21.537 real 0m1.347s 00:07:21.537 user 0m1.237s 00:07:21.537 sys 0m0.122s 00:07:21.537 23:33:10 accel.accel_decomp_full -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:21.537 23:33:10 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:21.537 ************************************ 00:07:21.537 END TEST accel_decomp_full 00:07:21.537 ************************************ 00:07:21.537 23:33:10 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:21.537 23:33:10 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.537 23:33:10 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:07:21.537 23:33:10 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:21.537 23:33:10 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.537 ************************************ 00:07:21.537 START TEST accel_decomp_mcore 00:07:21.537 ************************************ 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
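The multicore variant that follows (TEST accel_decomp_mcore) differs, as far as the logged command line shows, only by the core mask. A comparable standalone sketch, under the same assumptions as the one above:

# Same caveats as the previous sketch; -m 0xf is the core mask taken from the
# logged invocation and matches the "Total cores available: 4" notice below.
"$SPDK_ROOT/build/examples/accel_perf" -t 1 -w decompress \
    -l "$SPDK_ROOT/test/accel/bib" -y -m 0xf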
00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:21.537 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:21.537 [2024-07-15 23:33:10.341694] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:21.537 [2024-07-15 23:33:10.341759] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312502 ] 00:07:21.537 [2024-07-15 23:33:10.398651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.537 [2024-07-15 23:33:10.475101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.537 [2024-07-15 23:33:10.475197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.537 [2024-07-15 23:33:10.475306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.537 [2024-07-15 23:33:10.475308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.795 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- 
accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.796 23:33:10 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 
00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.728 00:07:22.728 real 0m1.351s 00:07:22.728 user 0m4.570s 00:07:22.728 sys 0m0.126s 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:22.728 23:33:11 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:22.728 ************************************ 00:07:22.728 END TEST accel_decomp_mcore 00:07:22.728 ************************************ 00:07:22.728 23:33:11 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:22.728 23:33:11 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.728 23:33:11 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 00:07:22.728 23:33:11 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:22.728 23:33:11 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.988 ************************************ 00:07:22.988 START TEST accel_decomp_full_mcore 00:07:22.988 ************************************ 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.988 23:33:11 
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:22.988 [2024-07-15 23:33:11.759445] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:22.988 [2024-07-15 23:33:11.759512] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1312769 ] 00:07:22.988 [2024-07-15 23:33:11.815275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.988 [2024-07-15 23:33:11.891040] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.988 [2024-07-15 23:33:11.891136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.988 [2024-07-15 23:33:11.891227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.988 [2024-07-15 23:33:11.891229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:22.988 23:33:11 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.988 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.989 23:33:11 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.427 00:07:24.427 real 0m1.361s 00:07:24.427 user 0m4.615s 00:07:24.427 sys 0m0.123s 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:24.427 23:33:13 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:24.427 ************************************ 00:07:24.427 END TEST accel_decomp_full_mcore 00:07:24.427 ************************************ 00:07:24.427 23:33:13 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:24.427 23:33:13 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.427 23:33:13 accel -- common/autotest_common.sh@1093 -- # '[' 11 -le 1 ']' 00:07:24.427 23:33:13 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:24.427 23:33:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.427 ************************************ 00:07:24.427 START TEST accel_decomp_mthread 00:07:24.427 ************************************ 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:24.427 23:33:13 
accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:24.427 [2024-07-15 23:33:13.185323] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:24.427 [2024-07-15 23:33:13.185375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313053 ] 00:07:24.427 [2024-07-15 23:33:13.240771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.427 [2024-07-15 23:33:13.313838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 
00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.427 23:33:13 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.427 23:33:13 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.798 00:07:25.798 real 0m1.340s 00:07:25.798 user 0m1.245s 00:07:25.798 sys 0m0.110s 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:25.798 23:33:14 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:25.798 ************************************ 00:07:25.798 END TEST accel_decomp_mthread 00:07:25.798 ************************************ 00:07:25.798 23:33:14 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:25.798 23:33:14 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.798 23:33:14 accel -- common/autotest_common.sh@1093 -- # '[' 13 -le 1 ']' 
00:07:25.798 23:33:14 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:25.798 23:33:14 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.798 ************************************ 00:07:25.798 START TEST accel_decomp_full_mthread 00:07:25.798 ************************************ 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1117 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:25.798 [2024-07-15 23:33:14.589728] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:07:25.798 [2024-07-15 23:33:14.589794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313318 ] 00:07:25.798 [2024-07-15 23:33:14.645517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.798 [2024-07-15 23:33:14.719200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.798 
23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.798 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.799 23:33:14 
accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.799 23:33:14 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.173 00:07:27.173 real 0m1.365s 00:07:27.173 user 0m1.264s 00:07:27.173 sys 0m0.114s 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:27.173 23:33:15 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:27.173 ************************************ 00:07:27.173 END TEST accel_decomp_full_mthread 00:07:27.173 ************************************ 
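A minimal sketch of how the accel_perf case traced above could be re-run by hand, assuming a local SPDK checkout with the examples built; the flag values mirror the command line recorded in this log, and the JSON accel config the harness passes on /dev/fd/62 is omitted here on the assumption that accel_perf defaults to the software module when no config is given:

```bash
#!/usr/bin/env bash
# Sketch only: reproduce the full-buffer, two-thread decompress case from this log.
# SPDK_DIR is an assumption; point it at a checkout where build/examples exists.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

# -t 1          : run the workload for 1 second (the "val='1 seconds'" entries above)
# -w decompress : decompress workload (accel_opc=decompress in the trace)
# -l <file>     : compressed input file used by the accel tests
# -y            : verify the decompressed output
# -o 0          : block size 0, used by the full-buffer variant (log shows '111250 bytes')
# -T 2          : two worker threads (the "_mthread" variant)
"$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
  -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2
```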
00:07:27.173 23:33:15 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:27.173 23:33:15 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:27.173 23:33:15 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:27.173 23:33:15 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:27.173 23:33:15 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.173 23:33:15 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.173 23:33:15 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.173 23:33:15 accel -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:07:27.173 23:33:15 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.173 23:33:15 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.173 23:33:15 accel -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:27.173 23:33:15 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:27.173 23:33:15 accel -- accel/accel.sh@41 -- # jq -r . 00:07:27.173 23:33:15 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.173 ************************************ 00:07:27.173 START TEST accel_dif_functional_tests 00:07:27.173 ************************************ 00:07:27.173 23:33:15 accel.accel_dif_functional_tests -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:27.173 [2024-07-15 23:33:16.035545] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:27.173 [2024-07-15 23:33:16.035582] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313598 ] 00:07:27.173 [2024-07-15 23:33:16.087357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:27.431 [2024-07-15 23:33:16.162896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.431 [2024-07-15 23:33:16.162915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.431 [2024-07-15 23:33:16.162917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.431 00:07:27.431 00:07:27.431 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.431 http://cunit.sourceforge.net/ 00:07:27.431 00:07:27.431 00:07:27.431 Suite: accel_dif 00:07:27.431 Test: verify: DIF generated, GUARD check ...passed 00:07:27.431 Test: verify: DIF generated, APPTAG check ...passed 00:07:27.431 Test: verify: DIF generated, REFTAG check ...passed 00:07:27.431 Test: verify: DIF not generated, GUARD check ...[2024-07-15 23:33:16.231202] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:27.431 passed 00:07:27.431 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 23:33:16.231250] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:27.431 passed 00:07:27.431 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 23:33:16.231269] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:27.431 passed 00:07:27.431 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:27.431 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 23:33:16.231310] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:27.431 passed 00:07:27.431 Test: verify: 
APPTAG incorrect, no APPTAG check ...passed 00:07:27.431 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:27.431 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:27.431 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 23:33:16.231406] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:27.431 passed 00:07:27.431 Test: verify copy: DIF generated, GUARD check ...passed 00:07:27.431 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:27.431 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:27.431 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 23:33:16.231512] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:27.431 passed 00:07:27.431 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 23:33:16.231531] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:27.431 passed 00:07:27.431 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 23:33:16.231577] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:27.431 passed 00:07:27.431 Test: generate copy: DIF generated, GUARD check ...passed 00:07:27.431 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:27.431 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:27.431 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:27.431 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:27.431 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:27.431 Test: generate copy: iovecs-len validate ...[2024-07-15 23:33:16.231730] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:27.431 passed 00:07:27.431 Test: generate copy: buffer alignment validate ...passed 00:07:27.431 00:07:27.431 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.431 suites 1 1 n/a 0 0 00:07:27.431 tests 26 26 26 0 0 00:07:27.431 asserts 115 115 115 0 n/a 00:07:27.431 00:07:27.431 Elapsed time = 0.002 seconds 00:07:27.431 00:07:27.431 real 0m0.405s 00:07:27.431 user 0m0.588s 00:07:27.431 sys 0m0.131s 00:07:27.431 23:33:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:27.431 23:33:16 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:27.431 ************************************ 00:07:27.431 END TEST accel_dif_functional_tests 00:07:27.431 ************************************ 00:07:27.690 23:33:16 accel -- common/autotest_common.sh@1136 -- # return 0 00:07:27.690 00:07:27.690 real 0m30.854s 00:07:27.690 user 0m34.672s 00:07:27.690 sys 0m4.160s 00:07:27.690 23:33:16 accel -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:27.690 23:33:16 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.690 ************************************ 00:07:27.690 END TEST accel 00:07:27.690 ************************************ 00:07:27.690 23:33:16 -- common/autotest_common.sh@1136 -- # return 0 00:07:27.690 23:33:16 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:27.690 23:33:16 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:07:27.690 23:33:16 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:27.690 23:33:16 -- common/autotest_common.sh@10 -- # set +x 00:07:27.690 ************************************ 00:07:27.690 START TEST accel_rpc 00:07:27.690 ************************************ 00:07:27.690 23:33:16 accel_rpc -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:27.690 * Looking for test storage... 00:07:27.690 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:27.690 23:33:16 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:27.690 23:33:16 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1313784 00:07:27.690 23:33:16 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:27.690 23:33:16 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1313784 00:07:27.690 23:33:16 accel_rpc -- common/autotest_common.sh@823 -- # '[' -z 1313784 ']' 00:07:27.690 23:33:16 accel_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.690 23:33:16 accel_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:27.690 23:33:16 accel_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.690 23:33:16 accel_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:27.690 23:33:16 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.690 [2024-07-15 23:33:16.617089] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:07:27.690 [2024-07-15 23:33:16.617134] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1313784 ] 00:07:27.947 [2024-07-15 23:33:16.672414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.947 [2024-07-15 23:33:16.746660] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.511 23:33:17 accel_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:28.512 23:33:17 accel_rpc -- common/autotest_common.sh@856 -- # return 0 00:07:28.512 23:33:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:28.512 23:33:17 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:28.512 23:33:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:28.512 23:33:17 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:28.512 23:33:17 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:28.512 23:33:17 accel_rpc -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:07:28.512 23:33:17 accel_rpc -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:28.512 23:33:17 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.512 ************************************ 00:07:28.512 START TEST accel_assign_opcode 00:07:28.512 ************************************ 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1117 -- # accel_assign_opcode_test_suite 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.512 [2024-07-15 23:33:17.448750] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.512 [2024-07-15 23:33:17.460775] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:28.512 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:28.769 software 00:07:28.769 00:07:28.769 real 0m0.242s 00:07:28.769 user 0m0.046s 00:07:28.769 sys 0m0.011s 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:28.769 23:33:17 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:28.769 ************************************ 00:07:28.769 END TEST accel_assign_opcode 00:07:28.769 ************************************ 00:07:28.769 23:33:17 accel_rpc -- common/autotest_common.sh@1136 -- # return 0 00:07:28.769 23:33:17 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1313784 00:07:28.769 23:33:17 accel_rpc -- common/autotest_common.sh@942 -- # '[' -z 1313784 ']' 00:07:28.769 23:33:17 accel_rpc -- common/autotest_common.sh@946 -- # kill -0 1313784 00:07:28.769 23:33:17 accel_rpc -- common/autotest_common.sh@947 -- # uname 00:07:28.769 23:33:17 accel_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:28.769 23:33:17 accel_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1313784 00:07:29.026 23:33:17 accel_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:29.026 23:33:17 accel_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:29.026 23:33:17 accel_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1313784' 00:07:29.026 killing process with pid 1313784 00:07:29.026 23:33:17 accel_rpc -- common/autotest_common.sh@961 -- # kill 1313784 00:07:29.026 23:33:17 accel_rpc -- common/autotest_common.sh@966 -- # wait 1313784 00:07:29.283 00:07:29.283 real 0m1.580s 00:07:29.283 user 0m1.658s 00:07:29.283 sys 0m0.415s 00:07:29.283 23:33:18 accel_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:29.283 23:33:18 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.283 ************************************ 00:07:29.283 END TEST accel_rpc 00:07:29.283 ************************************ 00:07:29.283 23:33:18 -- common/autotest_common.sh@1136 -- # return 0 00:07:29.283 23:33:18 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:29.283 23:33:18 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:07:29.283 23:33:18 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:29.283 23:33:18 -- common/autotest_common.sh@10 -- # set +x 00:07:29.283 ************************************ 00:07:29.283 START TEST app_cmdline 00:07:29.283 ************************************ 00:07:29.283 23:33:18 app_cmdline -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:29.283 * Looking for test storage... 
00:07:29.283 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:29.283 23:33:18 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:29.283 23:33:18 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1314088 00:07:29.283 23:33:18 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:29.283 23:33:18 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1314088 00:07:29.283 23:33:18 app_cmdline -- common/autotest_common.sh@823 -- # '[' -z 1314088 ']' 00:07:29.283 23:33:18 app_cmdline -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.283 23:33:18 app_cmdline -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:29.283 23:33:18 app_cmdline -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.283 23:33:18 app_cmdline -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:29.283 23:33:18 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:29.283 [2024-07-15 23:33:18.264065] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:29.283 [2024-07-15 23:33:18.264114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314088 ] 00:07:29.540 [2024-07-15 23:33:18.316665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.540 [2024-07-15 23:33:18.391120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.103 23:33:19 app_cmdline -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:30.103 23:33:19 app_cmdline -- common/autotest_common.sh@856 -- # return 0 00:07:30.103 23:33:19 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:30.360 { 00:07:30.360 "version": "SPDK v24.09-pre git sha1 00bf4c571", 00:07:30.360 "fields": { 00:07:30.360 "major": 24, 00:07:30.360 "minor": 9, 00:07:30.360 "patch": 0, 00:07:30.360 "suffix": "-pre", 00:07:30.360 "commit": "00bf4c571" 00:07:30.360 } 00:07:30.360 } 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:30.360 23:33:19 app_cmdline -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:30.360 23:33:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.360 23:33:19 app_cmdline -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ 
\s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:30.360 23:33:19 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.360 23:33:19 app_cmdline -- common/autotest_common.sh@642 -- # local es=0 00:07:30.360 23:33:19 app_cmdline -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.360 23:33:19 app_cmdline -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:30.360 23:33:19 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:30.361 23:33:19 app_cmdline -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:30.361 23:33:19 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:30.361 23:33:19 app_cmdline -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:30.361 23:33:19 app_cmdline -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:07:30.361 23:33:19 app_cmdline -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:30.361 23:33:19 app_cmdline -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:30.361 23:33:19 app_cmdline -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:30.618 request: 00:07:30.618 { 00:07:30.618 "method": "env_dpdk_get_mem_stats", 00:07:30.618 "req_id": 1 00:07:30.618 } 00:07:30.618 Got JSON-RPC error response 00:07:30.618 response: 00:07:30.618 { 00:07:30.618 "code": -32601, 00:07:30.618 "message": "Method not found" 00:07:30.618 } 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@645 -- # es=1 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:07:30.618 23:33:19 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1314088 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@942 -- # '[' -z 1314088 ']' 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@946 -- # kill -0 1314088 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@947 -- # uname 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1314088 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1314088' 00:07:30.618 killing process with pid 1314088 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@961 -- # kill 1314088 00:07:30.618 23:33:19 app_cmdline -- common/autotest_common.sh@966 -- # wait 1314088 00:07:30.876 00:07:30.876 real 0m1.649s 00:07:30.876 user 0m1.972s 00:07:30.876 sys 0m0.410s 00:07:30.876 23:33:19 app_cmdline -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:30.876 23:33:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:30.876 
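The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods and then checks two things: the two allowed methods answer normally, and any other method (env_dpdk_get_mem_stats in this trace) is refused with JSON-RPC error -32601 "Method not found". A minimal sketch of the same check against an already-running target, assuming an SPDK checkout at $SPDK_DIR and the default RPC socket:

# Hedged, standalone reproduction of the allow-list check traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk.sock"

# Allowed methods should respond.
$rpc spdk_get_version                      # prints the version JSON seen in the log
$rpc rpc_get_methods | jq -r '.[]' | sort  # should list exactly the two allowed methods

# Anything outside --rpcs-allowed must fail with "Method not found" (-32601).
if $rpc env_dpdk_get_mem_stats 2>/dev/null; then
    echo "unexpected: method was not filtered" >&2
    exit 1
fi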
************************************ 00:07:30.876 END TEST app_cmdline 00:07:30.876 ************************************ 00:07:30.876 23:33:19 -- common/autotest_common.sh@1136 -- # return 0 00:07:30.876 23:33:19 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:30.876 23:33:19 -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:07:30.876 23:33:19 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:30.876 23:33:19 -- common/autotest_common.sh@10 -- # set +x 00:07:30.876 ************************************ 00:07:30.876 START TEST version 00:07:30.876 ************************************ 00:07:30.876 23:33:19 version -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:31.132 * Looking for test storage... 00:07:31.132 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:31.132 23:33:19 version -- app/version.sh@17 -- # get_header_version major 00:07:31.132 23:33:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:31.132 23:33:19 version -- app/version.sh@14 -- # cut -f2 00:07:31.132 23:33:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.132 23:33:19 version -- app/version.sh@17 -- # major=24 00:07:31.132 23:33:19 version -- app/version.sh@18 -- # get_header_version minor 00:07:31.132 23:33:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:31.132 23:33:19 version -- app/version.sh@14 -- # cut -f2 00:07:31.132 23:33:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.132 23:33:19 version -- app/version.sh@18 -- # minor=9 00:07:31.132 23:33:19 version -- app/version.sh@19 -- # get_header_version patch 00:07:31.132 23:33:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:31.132 23:33:19 version -- app/version.sh@14 -- # cut -f2 00:07:31.132 23:33:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.132 23:33:19 version -- app/version.sh@19 -- # patch=0 00:07:31.132 23:33:19 version -- app/version.sh@20 -- # get_header_version suffix 00:07:31.132 23:33:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:31.132 23:33:19 version -- app/version.sh@14 -- # cut -f2 00:07:31.132 23:33:19 version -- app/version.sh@14 -- # tr -d '"' 00:07:31.132 23:33:19 version -- app/version.sh@20 -- # suffix=-pre 00:07:31.132 23:33:19 version -- app/version.sh@22 -- # version=24.9 00:07:31.132 23:33:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:31.132 23:33:19 version -- app/version.sh@28 -- # version=24.9rc0 00:07:31.132 23:33:19 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:31.132 23:33:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:31.132 23:33:19 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:31.132 23:33:19 version -- app/version.sh@31 -- 
# [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:31.132 00:07:31.132 real 0m0.148s 00:07:31.132 user 0m0.076s 00:07:31.132 sys 0m0.108s 00:07:31.132 23:33:19 version -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:31.132 23:33:19 version -- common/autotest_common.sh@10 -- # set +x 00:07:31.132 ************************************ 00:07:31.132 END TEST version 00:07:31.132 ************************************ 00:07:31.132 23:33:20 -- common/autotest_common.sh@1136 -- # return 0 00:07:31.132 23:33:20 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:31.133 23:33:20 -- spdk/autotest.sh@198 -- # uname -s 00:07:31.133 23:33:20 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:31.133 23:33:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:31.133 23:33:20 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:31.133 23:33:20 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:31.133 23:33:20 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:31.133 23:33:20 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:31.133 23:33:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.133 23:33:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.133 23:33:20 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:31.133 23:33:20 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:31.133 23:33:20 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:31.133 23:33:20 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:31.133 23:33:20 -- spdk/autotest.sh@283 -- # '[' rdma = rdma ']' 00:07:31.133 23:33:20 -- spdk/autotest.sh@284 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:31.133 23:33:20 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:31.133 23:33:20 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:31.133 23:33:20 -- common/autotest_common.sh@10 -- # set +x 00:07:31.133 ************************************ 00:07:31.133 START TEST nvmf_rdma 00:07:31.133 ************************************ 00:07:31.133 23:33:20 nvmf_rdma -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:31.391 * Looking for test storage... 00:07:31.391 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:31.391 23:33:20 nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.391 23:33:20 nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.391 23:33:20 nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.391 23:33:20 nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.391 23:33:20 nvmf_rdma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.391 23:33:20 nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.391 23:33:20 nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:07:31.391 23:33:20 nvmf_rdma -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:31.391 23:33:20 nvmf_rdma -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:31.391 23:33:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:31.391 23:33:20 nvmf_rdma -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:31.391 23:33:20 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:31.391 23:33:20 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:31.391 23:33:20 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:31.391 ************************************ 00:07:31.391 START TEST nvmf_example 00:07:31.391 ************************************ 00:07:31.391 23:33:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:07:31.391 * Looking for test storage... 
00:07:31.391 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:31.391 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:31.391 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:31.391 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:31.391 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:31.391 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:31.392 23:33:20 
nvmf_rdma.nvmf_example -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:31.392 23:33:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:31.651 23:33:20 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
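The pci_devs bookkeeping above whitelists candidate NVMe-oF NICs purely by PCI vendor/device ID (Intel 0x8086 E810/X722 parts and Mellanox 0x15b3 ConnectX parts). Outside the harness the same inventory can be taken with lspci; a rough sketch using a couple of the IDs collected above, not the harness' own pci_bus_cache arrays:

# Hex IDs are given to lspci without the 0x prefix.
lspci -nn -d 15b3:          # every Mellanox function on the host
lspci -nn -d 8086:1592      # one of the e810 IDs added above
lspci -nn -d 15b3:101d      # one of the ConnectX IDs added above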
00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:36.915 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:36.915 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.915 23:33:25 
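Each matched PCI function is then mapped to its kernel net device through sysfs; that is what the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob above does, and on this node it resolves to the mlx_0_* names echoed just below. The same lookup, done by hand for the first port found in this run:

# Forward direction: PCI address -> interface name, mirroring the glob above.
pci=0000:da:00.0
ls "/sys/bus/pci/devices/${pci}/net/"        # prints the netdev name (mlx_0_0 here)

# Reverse direction: which PCI device backs a given interface?
readlink -f /sys/class/net/mlx_0_0/device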
nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:36.915 Found net devices under 0000:da:00.0: mlx_0_0 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:36.915 Found net devices under 0000:da:00.1: mlx_0_1 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # uname 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- 
nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.915 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:36.916 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.916 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:36.916 altname enp218s0f0np0 00:07:36.916 altname ens818f0np0 00:07:36.916 inet 192.168.100.8/24 scope global mlx_0_0 00:07:36.916 valid_lft forever preferred_lft forever 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:36.916 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:36.916 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:36.916 altname enp218s0f1np1 00:07:36.916 altname ens818f1np1 00:07:36.916 inet 192.168.100.9/24 scope global mlx_0_1 00:07:36.916 valid_lft forever preferred_lft forever 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:36.916 23:33:25 
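allocate_nic_ips walks the RDMA interfaces and pulls each one's IPv4 address with a small ip/awk/cut pipeline, yielding 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1 in this run. Pulled out of the harness, the helper traced above reduces to roughly:

# Extract the primary IPv4 address of an interface, as get_ip_address does above.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0    # -> 192.168.100.8 on this node
get_ip_address mlx_0_1    # -> 192.168.100.9 on this node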
nvmf_rdma.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:36.916 192.168.100.9' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:36.916 192.168.100.9' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:36.916 
23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:36.916 192.168.100.9' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1317520 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1317520 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- common/autotest_common.sh@823 -- # '[' -z 1317520 ']' 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
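With NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP resolved, nvmfexamplestart launches build/examples/nvmf with the arguments shown above and waitforlisten blocks until its RPC socket answers, after which the rpc_cmd configuration calls below can proceed. A hedged stand-in for that launch-and-wait step (this polls rpc_get_methods as an approximation; it is not SPDK's actual waitforlisten implementation, and it assumes hugepages are already configured):

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
nvmfpid=$!

# Poll the default RPC socket until the target is ready (or ~10s elapse).
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "nvmf example target is up (pid $nvmfpid)"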
00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:36.916 23:33:25 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.480 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:37.480 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@856 -- # return 0 00:07:37.480 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:37.480 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:37.480 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:37.737 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:37.995 23:33:26 nvmf_rdma.nvmf_example -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:37.996 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:37.996 23:33:26 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:50.212 Initializing NVMe Controllers 00:07:50.212 Attached to 
NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:50.212 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:50.212 Initialization complete. Launching workers. 00:07:50.212 ======================================================== 00:07:50.212 Latency(us) 00:07:50.212 Device Information : IOPS MiB/s Average min max 00:07:50.212 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 25797.50 100.77 2482.09 643.74 12998.17 00:07:50.212 ======================================================== 00:07:50.212 Total : 25797.50 100.77 2482.09 643.74 12998.17 00:07:50.212 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:50.212 rmmod nvme_rdma 00:07:50.212 rmmod nvme_fabrics 00:07:50.212 23:33:37 nvmf_rdma.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1317520 ']' 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1317520 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@942 -- # '[' -z 1317520 ']' 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@946 -- # kill -0 1317520 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@947 -- # uname 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1317520 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@948 -- # process_name=nvmf 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@952 -- # '[' nvmf = sudo ']' 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1317520' 00:07:50.212 killing process with pid 1317520 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@961 -- # kill 1317520 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@966 -- # wait 1317520 00:07:50.212 nvmf threads initialize successfully 00:07:50.212 bdev subsystem init successfully 00:07:50.212 created a nvmf target service 00:07:50.212 create targets's poll groups done 00:07:50.212 all subsystems of target started 00:07:50.212 nvmf target is running 00:07:50.212 all subsystems of target stopped 00:07:50.212 destroy targets's poll groups done 00:07:50.212 destroyed the nvmf target service 00:07:50.212 bdev subsystem finish successfully 00:07:50.212 nvmf threads destroy 
successfully 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.212 00:07:50.212 real 0m18.073s 00:07:50.212 user 0m51.632s 00:07:50.212 sys 0m4.368s 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@1118 -- # xtrace_disable 00:07:50.212 23:33:38 nvmf_rdma.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:50.212 ************************************ 00:07:50.212 END TEST nvmf_example 00:07:50.212 ************************************ 00:07:50.212 23:33:38 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:07:50.212 23:33:38 nvmf_rdma -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:50.212 23:33:38 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:50.212 23:33:38 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:50.212 23:33:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:50.212 ************************************ 00:07:50.212 START TEST nvmf_filesystem 00:07:50.212 ************************************ 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:07:50.212 * Looking for test storage... 00:07:50.212 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:50.212 23:33:38 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:50.212 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:50.213 23:33:38 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:50.213 23:33:38 
nvmf_rdma.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:50.213 #define SPDK_CONFIG_H 00:07:50.213 #define SPDK_CONFIG_APPS 1 00:07:50.213 #define SPDK_CONFIG_ARCH native 00:07:50.213 #undef SPDK_CONFIG_ASAN 00:07:50.213 #undef SPDK_CONFIG_AVAHI 00:07:50.213 #undef SPDK_CONFIG_CET 00:07:50.213 #define SPDK_CONFIG_COVERAGE 1 00:07:50.213 #define SPDK_CONFIG_CROSS_PREFIX 00:07:50.213 #undef SPDK_CONFIG_CRYPTO 00:07:50.213 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:50.213 #undef SPDK_CONFIG_CUSTOMOCF 00:07:50.213 #undef 
SPDK_CONFIG_DAOS 00:07:50.213 #define SPDK_CONFIG_DAOS_DIR 00:07:50.213 #define SPDK_CONFIG_DEBUG 1 00:07:50.213 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:50.213 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:07:50.213 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:50.213 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:50.213 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:50.213 #undef SPDK_CONFIG_DPDK_UADK 00:07:50.213 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:07:50.213 #define SPDK_CONFIG_EXAMPLES 1 00:07:50.213 #undef SPDK_CONFIG_FC 00:07:50.213 #define SPDK_CONFIG_FC_PATH 00:07:50.213 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:50.213 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:50.213 #undef SPDK_CONFIG_FUSE 00:07:50.213 #undef SPDK_CONFIG_FUZZER 00:07:50.213 #define SPDK_CONFIG_FUZZER_LIB 00:07:50.213 #undef SPDK_CONFIG_GOLANG 00:07:50.213 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:50.213 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:50.213 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:50.213 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:50.213 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:50.213 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:50.213 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:50.213 #define SPDK_CONFIG_IDXD 1 00:07:50.213 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:50.213 #undef SPDK_CONFIG_IPSEC_MB 00:07:50.213 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:50.213 #define SPDK_CONFIG_ISAL 1 00:07:50.213 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:50.213 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:50.213 #define SPDK_CONFIG_LIBDIR 00:07:50.213 #undef SPDK_CONFIG_LTO 00:07:50.213 #define SPDK_CONFIG_MAX_LCORES 128 00:07:50.213 #define SPDK_CONFIG_NVME_CUSE 1 00:07:50.213 #undef SPDK_CONFIG_OCF 00:07:50.213 #define SPDK_CONFIG_OCF_PATH 00:07:50.213 #define SPDK_CONFIG_OPENSSL_PATH 00:07:50.213 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:50.213 #define SPDK_CONFIG_PGO_DIR 00:07:50.213 #undef SPDK_CONFIG_PGO_USE 00:07:50.213 #define SPDK_CONFIG_PREFIX /usr/local 00:07:50.213 #undef SPDK_CONFIG_RAID5F 00:07:50.213 #undef SPDK_CONFIG_RBD 00:07:50.213 #define SPDK_CONFIG_RDMA 1 00:07:50.213 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:50.213 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:50.213 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:50.213 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:50.213 #define SPDK_CONFIG_SHARED 1 00:07:50.213 #undef SPDK_CONFIG_SMA 00:07:50.213 #define SPDK_CONFIG_TESTS 1 00:07:50.213 #undef SPDK_CONFIG_TSAN 00:07:50.213 #define SPDK_CONFIG_UBLK 1 00:07:50.213 #define SPDK_CONFIG_UBSAN 1 00:07:50.213 #undef SPDK_CONFIG_UNIT_TESTS 00:07:50.213 #undef SPDK_CONFIG_URING 00:07:50.213 #define SPDK_CONFIG_URING_PATH 00:07:50.213 #undef SPDK_CONFIG_URING_ZNS 00:07:50.213 #undef SPDK_CONFIG_USDT 00:07:50.213 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:50.213 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:50.213 #undef SPDK_CONFIG_VFIO_USER 00:07:50.213 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:50.213 #define SPDK_CONFIG_VHOST 1 00:07:50.213 #define SPDK_CONFIG_VIRTIO 1 00:07:50.213 #undef SPDK_CONFIG_VTUNE 00:07:50.213 #define SPDK_CONFIG_VTUNE_DIR 00:07:50.213 #define SPDK_CONFIG_WERROR 1 00:07:50.213 #define SPDK_CONFIG_WPDK_DIR 00:07:50.213 #undef SPDK_CONFIG_XNVME 00:07:50.213 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.213 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:07:50.214 23:33:38 
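The glob test traced above checks the generated SPDK config header for a debug define before allowing debug-only test apps. A minimal sketch of that kind of check, with an assumed header path and illustrative messages (not the exact applications.sh code):

  # Sketch only: approximates the config-header check seen in the trace above.
  config_h=./include/spdk/config.h          # hypothetical location for this sketch
  if [[ -e "$config_h" && "$(<"$config_h")" == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build: debug-only test apps may be exercised"
  else
      echo "non-debug build: skipping debug-only test apps"
  fi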
nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 
0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:50.214 
23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:07:50.214 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:50.215 23:33:38 
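The long run of "-- # : <value>" followed by "-- # export SPDK_TEST_*" entries above is consistent with bash's assign-a-default-then-export idiom; a minimal sketch under that assumption, with illustrative variable values (the real autotest_common.sh may differ):

  # Sketch only: default-then-export pattern suggested by the trace above.
  : "${SPDK_TEST_NVMF:=0}"                  # keep any value set by the caller, else 0
  : "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"
  export SPDK_TEST_NVMF SPDK_TEST_NVMF_TRANSPORT
  if [[ "$SPDK_TEST_NVMF" -eq 1 ]]; then
      echo "NVMe-oF tests enabled, transport: $SPDK_TEST_NVMF_TRANSPORT"
  fi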
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@273 -- # MAKE=make 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@274 -- # MAKEFLAGS=-j96 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@290 -- # export HUGEMEM=4096 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@290 -- # HUGEMEM=4096 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@292 -- # NO_HUGE=() 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@293 -- # TEST_MODE= 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@294 -- # for i in "$@" 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@295 -- # case "$i" in 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@300 -- # TEST_TRANSPORT=rdma 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@312 -- # [[ -z 1319869 ]] 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@312 -- # kill -0 1319869 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1674 -- # set_test_storage 2147483648 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@322 -- # [[ -v testdir ]] 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@324 -- # local requested_size=2147483648 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@325 -- # local mount target_dir 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@327 -- # local -A mounts fss sizes avails uses 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@328 -- # local source fs size avail mount use 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@330 -- # local storage_fallback storage_candidates 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@332 -- # mktemp -udt spdk.XXXXXX 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@332 -- # storage_fallback=/tmp/spdk.PVzKHs 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@349 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.PVzKHs/tests/target /tmp/spdk.PVzKHs 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@352 -- # requested_size=2214592512 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@321 -- # df -T 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@321 -- # grep -v Filesystem 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_devtmpfs 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=devtmpfs 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=67108864 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=67108864 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=0 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:50.215 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=/dev/pmem0 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=ext2 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=1050284032 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=5284429824 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4234145792 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- 
common/autotest_common.sh@355 -- # mounts["$mount"]=spdk_root 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=overlay 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=189605781504 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=195974311936 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=6368530432 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=97931517952 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=97987153920 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=55635968 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=39185481728 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=39194865664 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=9383936 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=97985998848 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=97987158016 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=1159168 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # mounts["$mount"]=tmpfs 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@355 -- # fss["$mount"]=tmpfs 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # avails["$mount"]=19597426688 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@356 -- # sizes["$mount"]=19597430784 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@357 -- # uses["$mount"]=4096 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@354 -- # read -r source fs size use avail _ mount 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@360 -- # printf '* Looking for test storage...\n' 00:07:50.216 * Looking for test storage... 
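The storage probe traced above parses "df -T" into per-mount arrays and compares the available space on the test directory's mount against the requested size. A condensed sketch of that logic, with illustrative names (not the exact set_test_storage implementation):

  # Sketch only: mirrors the df -T parsing and size check seen in the trace above.
  requested_size=2214592512                 # ~2 GiB plus slack, as requested above
  declare -A fss sizes avails
  while read -r source fs size used avail _ mount; do
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))      # df -T reports 1K blocks
      avails["$mount"]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)

  target_mount=$(df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target \
                 | awk '$1 !~ /Filesystem/ {print $6}')
  if (( ${avails[$target_mount]:-0} >= requested_size )); then
      printf '* Found test storage at %s\n' "$target_mount"
  fi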
00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@362 -- # local target_space new_size 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@363 -- # for target_dir in "${storage_candidates[@]}" 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@366 -- # mount=/ 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@368 -- # target_space=189605781504 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@369 -- # (( target_space == 0 || target_space < requested_size )) 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@372 -- # (( target_space >= requested_size )) 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == tmpfs ]] 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ overlay == ramfs ]] 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@374 -- # [[ / == / ]] 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@375 -- # new_size=8583122944 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@376 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@381 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@382 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.216 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@383 -- # return 0 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set -o errtrace 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1677 -- # shopt -s extdebug 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1678 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1680 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1681 -- # true 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1683 -- # xtrace_fd 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 
0 ? 0 : 0 - 1]' 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.216 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@410 -- # local 
-g is_hw=no 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.217 23:33:38 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:55.490 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:55.490 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:55.490 Found net devices under 0000:da:00.0: mlx_0_0 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:55.490 Found net devices under 0000:da:00.1: mlx_0_1 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:55.490 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:55.491 23:33:43 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:55.491 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:55.491 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:07:55.491 altname enp218s0f0np0 00:07:55.491 altname ens818f0np0 00:07:55.491 inet 192.168.100.8/24 scope global mlx_0_0 00:07:55.491 valid_lft forever preferred_lft forever 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:55.491 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:55.491 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:07:55.491 altname enp218s0f1np1 00:07:55.491 altname ens818f1np1 00:07:55.491 inet 192.168.100.9/24 scope global mlx_0_1 00:07:55.491 valid_lft forever preferred_lft forever 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # 
get_rdma_if_list 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:55.491 192.168.100.9' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:55.491 192.168.100.9' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:55.491 192.168.100.9' 00:07:55.491 23:33:44 
nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:55.491 ************************************ 00:07:55.491 START TEST nvmf_filesystem_no_in_capsule 00:07:55.491 ************************************ 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 0 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1322921 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1322921 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 1322921 ']' 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
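At this point the harness has resolved the two Mellanox netdevs to 192.168.100.8 and 192.168.100.9 and fixed the transport options. A minimal sketch of that discovery step, assuming mlx5 ports registered under /sys/class/infiniband (this is a simplification for illustration only, not the actual get_rdma_if_list/rxe_cfg logic in nvmf/common.sh):

  rdma_ips=()
  for netdir in /sys/class/infiniband/*/device/net/*; do
      ifname=$(basename "$netdir")
      # first IPv4 address on the RDMA-capable interface, if any
      ip4=$(ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1 | head -n 1)
      [ -n "$ip4" ] && rdma_ips+=("$ip4")
  done
  NVMF_FIRST_TARGET_IP=${rdma_ips[0]}     # 192.168.100.8 in this run
  NVMF_SECOND_TARGET_IP=${rdma_ips[1]:-}  # 192.168.100.9 in this run
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  modprobe nvme-rdma                      # host-side RDMA fabric driver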
00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:07:55.491 23:33:44 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:55.491 [2024-07-15 23:33:44.207513] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:07:55.491 [2024-07-15 23:33:44.207560] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.491 [2024-07-15 23:33:44.263547] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:55.491 [2024-07-15 23:33:44.342175] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.491 [2024-07-15 23:33:44.342211] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.491 [2024-07-15 23:33:44.342217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.491 [2024-07-15 23:33:44.342223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.491 [2024-07-15 23:33:44.342227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.491 [2024-07-15 23:33:44.342286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.491 [2024-07-15 23:33:44.342378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.491 [2024-07-15 23:33:44.342463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:55.491 [2024-07-15 23:33:44.342464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.058 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:07:56.058 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:07:56.058 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.058 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.058 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.317 [2024-07-15 23:33:45.059423] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:07:56.317 [2024-07-15 23:33:45.079658] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x111dcc0/0x11221b0) succeed. 
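The rdma.c notices above come from the transport creation on the freshly started nvmf_tgt. A condensed sketch of the target-side bring-up this test issues through rpc_cmd, with the same arguments that appear in the trace around this point (invoking SPDK's scripts/rpc.py directly is an assumption; the harness wraps it):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  $rpc bdev_malloc_create 512 512 -b Malloc1        # 512 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420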
00:07:56.317 [2024-07-15 23:33:45.088819] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x111f300/0x1163840) succeed. 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:56.317 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 Malloc1 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 [2024-07-15 23:33:45.330177] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:56.576 23:33:45 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:07:56.576 { 00:07:56.576 "name": "Malloc1", 00:07:56.576 "aliases": [ 00:07:56.576 "10dc499c-6a95-40df-abde-8d0b8f807a06" 00:07:56.576 ], 00:07:56.576 "product_name": "Malloc disk", 00:07:56.576 "block_size": 512, 00:07:56.576 "num_blocks": 1048576, 00:07:56.576 "uuid": "10dc499c-6a95-40df-abde-8d0b8f807a06", 00:07:56.576 "assigned_rate_limits": { 00:07:56.576 "rw_ios_per_sec": 0, 00:07:56.576 "rw_mbytes_per_sec": 0, 00:07:56.576 "r_mbytes_per_sec": 0, 00:07:56.576 "w_mbytes_per_sec": 0 00:07:56.576 }, 00:07:56.576 "claimed": true, 00:07:56.576 "claim_type": "exclusive_write", 00:07:56.576 "zoned": false, 00:07:56.576 "supported_io_types": { 00:07:56.576 "read": true, 00:07:56.576 "write": true, 00:07:56.576 "unmap": true, 00:07:56.576 "flush": true, 00:07:56.576 "reset": true, 00:07:56.576 "nvme_admin": false, 00:07:56.576 "nvme_io": false, 00:07:56.576 "nvme_io_md": false, 00:07:56.576 "write_zeroes": true, 00:07:56.576 "zcopy": true, 00:07:56.576 "get_zone_info": false, 00:07:56.576 "zone_management": false, 00:07:56.576 "zone_append": false, 00:07:56.576 "compare": false, 00:07:56.576 "compare_and_write": false, 00:07:56.576 "abort": true, 00:07:56.576 "seek_hole": false, 00:07:56.576 "seek_data": false, 00:07:56.576 "copy": true, 00:07:56.576 "nvme_iov_md": false 00:07:56.576 }, 00:07:56.576 "memory_domains": [ 00:07:56.576 { 00:07:56.576 "dma_device_id": "system", 00:07:56.576 "dma_device_type": 1 00:07:56.576 }, 00:07:56.576 { 00:07:56.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.576 "dma_device_type": 2 00:07:56.576 } 00:07:56.576 ], 00:07:56.576 "driver_specific": {} 00:07:56.576 } 00:07:56.576 ]' 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:56.576 23:33:45 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:07:57.511 23:33:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:57.511 23:33:46 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:07:57.511 23:33:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:07:57.511 23:33:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:07:57.511 23:33:46 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # return 0 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:00.042 23:33:48 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:08:00.991 23:33:49 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.991 ************************************ 00:08:00.991 START TEST filesystem_ext4 00:08:00.991 ************************************ 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@921 -- # local force 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:00.991 mke2fs 1.46.5 (30-Dec-2021) 00:08:00.991 Discarding device blocks: 0/522240 done 00:08:00.991 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:00.991 Filesystem UUID: 64b06527-6b78-46c6-bb91-af71732c4eb3 00:08:00.991 Superblock backups stored on blocks: 00:08:00.991 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:00.991 00:08:00.991 Allocating group tables: 0/64 done 00:08:00.991 Writing inode tables: 0/64 done 00:08:00.991 Creating journal (8192 blocks): done 00:08:00.991 Writing superblocks and filesystem accounting information: 0/64 done 00:08:00.991 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # return 0 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:00.991 23:33:49 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1322921 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:00.991 00:08:00.991 real 0m0.178s 00:08:00.991 user 0m0.040s 00:08:00.991 sys 0m0.046s 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:00.991 ************************************ 00:08:00.991 END TEST filesystem_ext4 00:08:00.991 ************************************ 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.991 ************************************ 00:08:00.991 START TEST filesystem_btrfs 00:08:00.991 ************************************ 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@921 -- # local force 00:08:00.991 
23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:08:00.991 23:33:49 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:01.249 btrfs-progs v6.6.2 00:08:01.249 See https://btrfs.readthedocs.io for more information. 00:08:01.249 00:08:01.249 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:01.249 NOTE: several default settings have changed in version 5.15, please make sure 00:08:01.249 this does not affect your deployments: 00:08:01.249 - DUP for metadata (-m dup) 00:08:01.249 - enabled no-holes (-O no-holes) 00:08:01.249 - enabled free-space-tree (-R free-space-tree) 00:08:01.249 00:08:01.249 Label: (null) 00:08:01.249 UUID: 54a69b80-0683-4b55-ada7-d4bf2598668c 00:08:01.249 Node size: 16384 00:08:01.249 Sector size: 4096 00:08:01.249 Filesystem size: 510.00MiB 00:08:01.249 Block group profiles: 00:08:01.249 Data: single 8.00MiB 00:08:01.249 Metadata: DUP 32.00MiB 00:08:01.249 System: DUP 8.00MiB 00:08:01.249 SSD detected: yes 00:08:01.249 Zoned device: no 00:08:01.249 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:01.249 Runtime features: free-space-tree 00:08:01.249 Checksum: crc32c 00:08:01.249 Number of devices: 1 00:08:01.249 Devices: 00:08:01.249 ID SIZE PATH 00:08:01.249 1 510.00MiB /dev/nvme0n1p1 00:08:01.249 00:08:01.249 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # return 0 00:08:01.249 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.249 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:01.249 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:01.249 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1322921 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:01.250 00:08:01.250 real 0m0.251s 00:08:01.250 user 0m0.027s 00:08:01.250 
sys 0m0.119s 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:01.250 ************************************ 00:08:01.250 END TEST filesystem_btrfs 00:08:01.250 ************************************ 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:01.250 ************************************ 00:08:01.250 START TEST filesystem_xfs 00:08:01.250 ************************************ 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@920 -- # local i=0 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@921 -- # local force 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # force=-f 00:08:01.250 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:01.507 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:01.507 = sectsz=512 attr=2, projid32bit=1 00:08:01.507 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:01.507 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:01.507 data = bsize=4096 blocks=130560, imaxpct=25 00:08:01.507 = sunit=0 swidth=0 blks 00:08:01.507 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:01.507 log =internal log bsize=4096 blocks=16384, version=2 00:08:01.507 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:01.507 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:01.508 Discarding blocks...Done. 
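Each of the three filesystem sub-tests (ext4, btrfs, and the xfs run whose mkfs output appears above) follows the same pattern from target/filesystem.sh: format the partition, mount it, exercise a file, unmount, then confirm the target process and the namespace are still healthy. A simplified sketch, shown for xfs with the mkfs flags reduced to the force option (the trace also times each run):

  dev=/dev/nvme0n1p1
  mkfs.xfs -f "$dev"
  mount "$dev" /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 1322921                          # nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible on the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible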
00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # return 0 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1322921 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:01.508 00:08:01.508 real 0m0.184s 00:08:01.508 user 0m0.021s 00:08:01.508 sys 0m0.063s 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:01.508 ************************************ 00:08:01.508 END TEST filesystem_xfs 00:08:01.508 ************************************ 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:01.508 23:33:50 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.444 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.444 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1213 -- # local i=0 00:08:02.444 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:02.444 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 
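With the last filesystem check done, the partition is dropped and the host/target sides are torn down; the subsystem deletion and the process kill show up in the trace just after this. A sketch of that teardown, using the pid and NQN from this run (rpc.py invocation assumed, as before):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill 1322921   # killprocess in the harness, followed by a wait on the pid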
00:08:02.444 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:02.444 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1322921 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 1322921 ']' 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # kill -0 1322921 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # uname 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1322921 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1322921' 00:08:02.703 killing process with pid 1322921 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@961 -- # kill 1322921 00:08:02.703 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # wait 1322921 00:08:02.963 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:02.963 00:08:02.963 real 0m7.727s 00:08:02.963 user 0m30.168s 00:08:02.963 sys 0m1.018s 00:08:02.963 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:02.963 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.963 ************************************ 00:08:02.963 END TEST nvmf_filesystem_no_in_capsule 00:08:02.963 ************************************ 00:08:02.963 23:33:51 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:08:02.963 23:33:51 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:02.963 23:33:51 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:02.963 23:33:51 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:02.963 23:33:51 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:03.222 ************************************ 00:08:03.222 START TEST nvmf_filesystem_in_capsule 00:08:03.222 ************************************ 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1117 -- # nvmf_filesystem_part 4096 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1324369 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1324369 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@823 -- # '[' -z 1324369 ']' 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:03.222 23:33:51 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.222 [2024-07-15 23:33:52.001411] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:08:03.222 [2024-07-15 23:33:52.001450] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.222 [2024-07-15 23:33:52.057144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.222 [2024-07-15 23:33:52.140281] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.222 [2024-07-15 23:33:52.140317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
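The second test case repeats the whole sequence with in-capsule data enabled; the only functional difference on the target side is the -c value handed to the transport, which the trace issues via rpc_cmd a few lines below. Sketch (rpc.py invocation assumed):

  # -c sets the in-capsule data size in bytes: 0 in the first case, 4096 here
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096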
00:08:03.223 [2024-07-15 23:33:52.140323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.223 [2024-07-15 23:33:52.140329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.223 [2024-07-15 23:33:52.140334] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.223 [2024-07-15 23:33:52.140375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.223 [2024-07-15 23:33:52.140470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.223 [2024-07-15 23:33:52.140565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.223 [2024-07-15 23:33:52.140567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # return 0 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.160 [2024-07-15 23:33:52.875729] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x578cc0/0x57d1b0) succeed. 00:08:04.160 [2024-07-15 23:33:52.884924] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x57a300/0x5be840) succeed. 
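Once the listener is up, the host-side attach is the same in both variants. A sketch with the addresses and serial taken from the trace (the --hostnqn/--hostid flags used in the trace are omitted here, and waitforserial is approximated by a simple poll loop):

  # -i 15 caps the number of I/O queues, matching NVME_CONNECT in the trace
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do
      sleep 2
  done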
00:08:04.160 23:33:52 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.160 Malloc1 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:04.160 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.420 [2024-07-15 23:33:53.148744] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1372 -- # local bdev_name=Malloc1 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1373 -- # local bdev_info 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bs 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local nb 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:04.420 
23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # bdev_info='[ 00:08:04.420 { 00:08:04.420 "name": "Malloc1", 00:08:04.420 "aliases": [ 00:08:04.420 "3729ed4c-13ab-4925-becd-dd87f0c21009" 00:08:04.420 ], 00:08:04.420 "product_name": "Malloc disk", 00:08:04.420 "block_size": 512, 00:08:04.420 "num_blocks": 1048576, 00:08:04.420 "uuid": "3729ed4c-13ab-4925-becd-dd87f0c21009", 00:08:04.420 "assigned_rate_limits": { 00:08:04.420 "rw_ios_per_sec": 0, 00:08:04.420 "rw_mbytes_per_sec": 0, 00:08:04.420 "r_mbytes_per_sec": 0, 00:08:04.420 "w_mbytes_per_sec": 0 00:08:04.420 }, 00:08:04.420 "claimed": true, 00:08:04.420 "claim_type": "exclusive_write", 00:08:04.420 "zoned": false, 00:08:04.420 "supported_io_types": { 00:08:04.420 "read": true, 00:08:04.420 "write": true, 00:08:04.420 "unmap": true, 00:08:04.420 "flush": true, 00:08:04.420 "reset": true, 00:08:04.420 "nvme_admin": false, 00:08:04.420 "nvme_io": false, 00:08:04.420 "nvme_io_md": false, 00:08:04.420 "write_zeroes": true, 00:08:04.420 "zcopy": true, 00:08:04.420 "get_zone_info": false, 00:08:04.420 "zone_management": false, 00:08:04.420 "zone_append": false, 00:08:04.420 "compare": false, 00:08:04.420 "compare_and_write": false, 00:08:04.420 "abort": true, 00:08:04.420 "seek_hole": false, 00:08:04.420 "seek_data": false, 00:08:04.420 "copy": true, 00:08:04.420 "nvme_iov_md": false 00:08:04.420 }, 00:08:04.420 "memory_domains": [ 00:08:04.420 { 00:08:04.420 "dma_device_id": "system", 00:08:04.420 "dma_device_type": 1 00:08:04.420 }, 00:08:04.420 { 00:08:04.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.420 "dma_device_type": 2 00:08:04.420 } 00:08:04.420 ], 00:08:04.420 "driver_specific": {} 00:08:04.420 } 00:08:04.420 ]' 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # jq '.[] .block_size' 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # bs=512 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # jq '.[] .num_blocks' 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # nb=1048576 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bdev_size=512 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # echo 512 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:04.420 23:33:53 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:05.356 23:33:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:05.356 23:33:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1192 -- # local i=0 00:08:05.356 23:33:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:08:05.356 23:33:54 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:08:05.356 23:33:54 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # sleep 2 00:08:07.262 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:08:07.262 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:08:07.262 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # return 0 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:07.521 23:33:56 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.459 ************************************ 00:08:08.459 START TEST filesystem_in_capsule_ext4 00:08:08.459 
************************************ 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@918 -- # local fstype=ext4 00:08:08.459 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@920 -- # local i=0 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@921 -- # local force 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # '[' ext4 = ext4 ']' 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # force=-F 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:08.719 mke2fs 1.46.5 (30-Dec-2021) 00:08:08.719 Discarding device blocks: 0/522240 done 00:08:08.719 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:08.719 Filesystem UUID: 7a6707fa-8e8b-412e-b1d1-0d42f454c19f 00:08:08.719 Superblock backups stored on blocks: 00:08:08.719 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:08.719 00:08:08.719 Allocating group tables: 0/64 done 00:08:08.719 Writing inode tables: 0/64 done 00:08:08.719 Creating journal (8192 blocks): done 00:08:08.719 Writing superblocks and filesystem accounting information: 0/64 done 00:08:08.719 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # return 0 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1324369 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.719 00:08:08.719 real 0m0.173s 00:08:08.719 user 0m0.019s 00:08:08.719 sys 0m0.068s 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:08.719 ************************************ 00:08:08.719 END TEST filesystem_in_capsule_ext4 00:08:08.719 ************************************ 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:08.719 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.720 ************************************ 00:08:08.720 START TEST filesystem_in_capsule_btrfs 00:08:08.720 ************************************ 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@918 -- # local fstype=btrfs 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@920 -- # local i=0 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@921 -- # local force 00:08:08.720 23:33:57 
nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # '[' btrfs = ext4 ']' 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # force=-f 00:08:08.720 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:08.981 btrfs-progs v6.6.2 00:08:08.981 See https://btrfs.readthedocs.io for more information. 00:08:08.981 00:08:08.981 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:08.981 NOTE: several default settings have changed in version 5.15, please make sure 00:08:08.981 this does not affect your deployments: 00:08:08.981 - DUP for metadata (-m dup) 00:08:08.981 - enabled no-holes (-O no-holes) 00:08:08.981 - enabled free-space-tree (-R free-space-tree) 00:08:08.981 00:08:08.981 Label: (null) 00:08:08.981 UUID: be9922f8-76eb-4c48-a09a-ccf2294c4bfe 00:08:08.981 Node size: 16384 00:08:08.981 Sector size: 4096 00:08:08.981 Filesystem size: 510.00MiB 00:08:08.981 Block group profiles: 00:08:08.981 Data: single 8.00MiB 00:08:08.981 Metadata: DUP 32.00MiB 00:08:08.981 System: DUP 8.00MiB 00:08:08.981 SSD detected: yes 00:08:08.981 Zoned device: no 00:08:08.981 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:08.981 Runtime features: free-space-tree 00:08:08.981 Checksum: crc32c 00:08:08.981 Number of devices: 1 00:08:08.981 Devices: 00:08:08.981 ID SIZE PATH 00:08:08.981 1 510.00MiB /dev/nvme0n1p1 00:08:08.981 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # return 0 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1324369 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.981 00:08:08.981 real 0m0.238s 00:08:08.981 user 0m0.017s 00:08:08.981 sys 0m0.128s 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:08.981 ************************************ 00:08:08.981 END TEST filesystem_in_capsule_btrfs 00:08:08.981 ************************************ 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1093 -- # '[' 4 -le 1 ']' 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:08.981 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.266 ************************************ 00:08:09.266 START TEST filesystem_in_capsule_xfs 00:08:09.266 ************************************ 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1117 -- # nvmf_filesystem_create xfs nvme0n1 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@918 -- # local fstype=xfs 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@919 -- # local dev_name=/dev/nvme0n1p1 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@920 -- # local i=0 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@921 -- # local force 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # '[' xfs = ext4 ']' 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # force=-f 00:08:09.266 23:33:57 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:09.266 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:09.266 = sectsz=512 attr=2, projid32bit=1 00:08:09.266 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:09.266 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:09.266 data = bsize=4096 blocks=130560, imaxpct=25 00:08:09.266 = sunit=0 swidth=0 blks 00:08:09.266 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 
00:08:09.266 log =internal log bsize=4096 blocks=16384, version=2 00:08:09.266 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:09.266 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:09.266 Discarding blocks...Done. 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # return 0 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1324369 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.266 00:08:09.266 real 0m0.192s 00:08:09.266 user 0m0.024s 00:08:09.266 sys 0m0.068s 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:09.266 ************************************ 00:08:09.266 END TEST filesystem_in_capsule_xfs 00:08:09.266 ************************************ 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1136 -- # return 0 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.266 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:09.566 23:33:58 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:10.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1213 -- # local i=0 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # return 0 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1324369 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@942 -- # '[' -z 1324369 ']' 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # kill -0 1324369 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # uname 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1324369 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1324369' 00:08:10.502 killing process with pid 1324369 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@961 -- # kill 1324369 00:08:10.502 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # wait 1324369 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:10.761 00:08:10.761 real 0m7.735s 00:08:10.761 user 0m30.133s 00:08:10.761 sys 0m1.068s 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.761 ************************************ 00:08:10.761 END TEST nvmf_filesystem_in_capsule 00:08:10.761 ************************************ 00:08:10.761 23:33:59 
nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1136 -- # return 0 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:10.761 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:10.761 rmmod nvme_rdma 00:08:11.019 rmmod nvme_fabrics 00:08:11.019 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.019 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:11.019 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:11.019 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:11.019 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.019 23:33:59 nvmf_rdma.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:11.019 00:08:11.019 real 0m21.374s 00:08:11.019 user 1m2.029s 00:08:11.019 sys 0m6.424s 00:08:11.019 23:33:59 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:11.019 23:33:59 nvmf_rdma.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:11.019 ************************************ 00:08:11.019 END TEST nvmf_filesystem 00:08:11.019 ************************************ 00:08:11.019 23:33:59 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:08:11.019 23:33:59 nvmf_rdma -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:11.019 23:33:59 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:11.019 23:33:59 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:11.019 23:33:59 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:11.019 ************************************ 00:08:11.019 START TEST nvmf_target_discovery 00:08:11.019 ************************************ 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:08:11.019 * Looking for test storage... 
00:08:11.019 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.019 23:33:59 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:11.020 23:33:59 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:16.284 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:16.284 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.284 23:34:04 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:16.284 Found net devices under 0000:da:00.0: mlx_0_0 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:16.284 Found net devices under 0000:da:00.1: mlx_0_1 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:16.284 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:16.285 23:34:04 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:16.285 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.285 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:16.285 altname enp218s0f0np0 00:08:16.285 altname ens818f0np0 00:08:16.285 inet 192.168.100.8/24 scope global mlx_0_0 00:08:16.285 valid_lft forever preferred_lft forever 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:16.285 23:34:05 
nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:16.285 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:16.285 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:16.285 altname enp218s0f1np1 00:08:16.285 altname ens818f1np1 00:08:16.285 inet 192.168.100.9/24 scope global mlx_0_1 00:08:16.285 valid_lft forever preferred_lft forever 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:16.285 192.168.100.9' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:16.285 192.168.100.9' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:16.285 192.168.100.9' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1328957 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1328957 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@823 -- # '[' -z 1328957 ']' 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
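At this point discovery.sh has started the target through nvmfappstart: nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0xF (pid 1328957 in this run) and the script blocks until the RPC socket at /var/tmp/spdk.sock accepts commands. A minimal sketch of that start-and-wait pattern, not the harness's exact waitforlisten helper, assuming an SPDK build tree:

    # Sketch: launch the target, then poll the RPC socket until it answers
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done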
00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:16.285 23:34:05 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:16.285 [2024-07-15 23:34:05.214564] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:08:16.285 [2024-07-15 23:34:05.214616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.544 [2024-07-15 23:34:05.271312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.544 [2024-07-15 23:34:05.350251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.544 [2024-07-15 23:34:05.350290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.544 [2024-07-15 23:34:05.350296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.544 [2024-07-15 23:34:05.350302] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.544 [2024-07-15 23:34:05.350306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:16.544 [2024-07-15 23:34:05.350397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.544 [2024-07-15 23:34:05.350514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.544 [2024-07-15 23:34:05.350626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.544 [2024-07-15 23:34:05.350627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@856 -- # return 0 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.111 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 [2024-07-15 23:34:06.097681] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x12c3cc0/0x12c81b0) succeed. 00:08:17.370 [2024-07-15 23:34:06.106861] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x12c5300/0x1309840) succeed. 
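With the target app up and both mlx5 IB devices registered, the trace that follows is the actual discovery-target setup: one RDMA transport, then four null bdevs exposed through four subsystems, each listening on 192.168.100.8:4420. rpc_cmd is the autotest wrapper around SPDK's JSON-RPC interface; a condensed sketch of the same sequence driven through scripts/rpc.py directly (default /var/tmp/spdk.sock socket assumed, arguments copied from the trace) would look like:

    rpc=./scripts/rpc.py   # assumes the default /var/tmp/spdk.sock RPC socket
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in 1 2 3 4; do
        $rpc bdev_null_create Null$i 102400 512        # same size/block args as the trace
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done

The test then adds a discovery listener on 4420 plus a referral on 4430, runs nvme discover against the target, and checks that the discovery log and nvmf_get_subsystems report all four subsystems before tearing everything back down.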
00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 Null1 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 [2024-07-15 23:34:06.266309] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 Null2 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:17.370 23:34:06 
nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 Null3 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.370 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 Null4 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.371 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.629 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:08:17.629 00:08:17.629 Discovery Log Number of Records 6, Generation counter 6 00:08:17.629 =====Discovery Log Entry 0====== 00:08:17.629 trtype: rdma 00:08:17.629 adrfam: ipv4 00:08:17.629 subtype: current discovery subsystem 00:08:17.629 treq: not required 00:08:17.629 portid: 0 00:08:17.629 trsvcid: 4420 00:08:17.629 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:17.629 traddr: 192.168.100.8 00:08:17.629 eflags: explicit discovery connections, duplicate discovery information 00:08:17.629 rdma_prtype: not specified 00:08:17.629 rdma_qptype: connected 00:08:17.629 rdma_cms: rdma-cm 00:08:17.629 rdma_pkey: 0x0000 00:08:17.629 =====Discovery Log Entry 1====== 00:08:17.629 trtype: rdma 00:08:17.629 adrfam: ipv4 00:08:17.629 subtype: nvme subsystem 00:08:17.629 treq: not required 00:08:17.629 portid: 0 00:08:17.629 trsvcid: 4420 00:08:17.629 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:17.629 traddr: 192.168.100.8 00:08:17.629 eflags: none 00:08:17.630 rdma_prtype: not specified 00:08:17.630 rdma_qptype: connected 00:08:17.630 rdma_cms: rdma-cm 00:08:17.630 rdma_pkey: 0x0000 00:08:17.630 =====Discovery Log Entry 2====== 00:08:17.630 
trtype: rdma 00:08:17.630 adrfam: ipv4 00:08:17.630 subtype: nvme subsystem 00:08:17.630 treq: not required 00:08:17.630 portid: 0 00:08:17.630 trsvcid: 4420 00:08:17.630 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:17.630 traddr: 192.168.100.8 00:08:17.630 eflags: none 00:08:17.630 rdma_prtype: not specified 00:08:17.630 rdma_qptype: connected 00:08:17.630 rdma_cms: rdma-cm 00:08:17.630 rdma_pkey: 0x0000 00:08:17.630 =====Discovery Log Entry 3====== 00:08:17.630 trtype: rdma 00:08:17.630 adrfam: ipv4 00:08:17.630 subtype: nvme subsystem 00:08:17.630 treq: not required 00:08:17.630 portid: 0 00:08:17.630 trsvcid: 4420 00:08:17.630 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:17.630 traddr: 192.168.100.8 00:08:17.630 eflags: none 00:08:17.630 rdma_prtype: not specified 00:08:17.630 rdma_qptype: connected 00:08:17.630 rdma_cms: rdma-cm 00:08:17.630 rdma_pkey: 0x0000 00:08:17.630 =====Discovery Log Entry 4====== 00:08:17.630 trtype: rdma 00:08:17.630 adrfam: ipv4 00:08:17.630 subtype: nvme subsystem 00:08:17.630 treq: not required 00:08:17.630 portid: 0 00:08:17.630 trsvcid: 4420 00:08:17.630 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:17.630 traddr: 192.168.100.8 00:08:17.630 eflags: none 00:08:17.630 rdma_prtype: not specified 00:08:17.630 rdma_qptype: connected 00:08:17.630 rdma_cms: rdma-cm 00:08:17.630 rdma_pkey: 0x0000 00:08:17.630 =====Discovery Log Entry 5====== 00:08:17.630 trtype: rdma 00:08:17.630 adrfam: ipv4 00:08:17.630 subtype: discovery subsystem referral 00:08:17.630 treq: not required 00:08:17.630 portid: 0 00:08:17.630 trsvcid: 4430 00:08:17.630 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:17.630 traddr: 192.168.100.8 00:08:17.630 eflags: none 00:08:17.630 rdma_prtype: unrecognized 00:08:17.630 rdma_qptype: unrecognized 00:08:17.630 rdma_cms: unrecognized 00:08:17.630 rdma_pkey: 0x0000 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:17.630 Perform nvmf subsystem discovery via RPC 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 [ 00:08:17.630 { 00:08:17.630 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:17.630 "subtype": "Discovery", 00:08:17.630 "listen_addresses": [ 00:08:17.630 { 00:08:17.630 "trtype": "RDMA", 00:08:17.630 "adrfam": "IPv4", 00:08:17.630 "traddr": "192.168.100.8", 00:08:17.630 "trsvcid": "4420" 00:08:17.630 } 00:08:17.630 ], 00:08:17.630 "allow_any_host": true, 00:08:17.630 "hosts": [] 00:08:17.630 }, 00:08:17.630 { 00:08:17.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.630 "subtype": "NVMe", 00:08:17.630 "listen_addresses": [ 00:08:17.630 { 00:08:17.630 "trtype": "RDMA", 00:08:17.630 "adrfam": "IPv4", 00:08:17.630 "traddr": "192.168.100.8", 00:08:17.630 "trsvcid": "4420" 00:08:17.630 } 00:08:17.630 ], 00:08:17.630 "allow_any_host": true, 00:08:17.630 "hosts": [], 00:08:17.630 "serial_number": "SPDK00000000000001", 00:08:17.630 "model_number": "SPDK bdev Controller", 00:08:17.630 "max_namespaces": 32, 00:08:17.630 "min_cntlid": 1, 00:08:17.630 "max_cntlid": 65519, 00:08:17.630 "namespaces": [ 00:08:17.630 { 00:08:17.630 "nsid": 1, 00:08:17.630 "bdev_name": "Null1", 00:08:17.630 "name": "Null1", 00:08:17.630 "nguid": "E1F275CEE24B4B5BA9921AC2C6DB3C1A", 00:08:17.630 "uuid": 
"e1f275ce-e24b-4b5b-a992-1ac2c6db3c1a" 00:08:17.630 } 00:08:17.630 ] 00:08:17.630 }, 00:08:17.630 { 00:08:17.630 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:17.630 "subtype": "NVMe", 00:08:17.630 "listen_addresses": [ 00:08:17.630 { 00:08:17.630 "trtype": "RDMA", 00:08:17.630 "adrfam": "IPv4", 00:08:17.630 "traddr": "192.168.100.8", 00:08:17.630 "trsvcid": "4420" 00:08:17.630 } 00:08:17.630 ], 00:08:17.630 "allow_any_host": true, 00:08:17.630 "hosts": [], 00:08:17.630 "serial_number": "SPDK00000000000002", 00:08:17.630 "model_number": "SPDK bdev Controller", 00:08:17.630 "max_namespaces": 32, 00:08:17.630 "min_cntlid": 1, 00:08:17.630 "max_cntlid": 65519, 00:08:17.630 "namespaces": [ 00:08:17.630 { 00:08:17.630 "nsid": 1, 00:08:17.630 "bdev_name": "Null2", 00:08:17.630 "name": "Null2", 00:08:17.630 "nguid": "374D9701B07B42E69332B95370B3CB91", 00:08:17.630 "uuid": "374d9701-b07b-42e6-9332-b95370b3cb91" 00:08:17.630 } 00:08:17.630 ] 00:08:17.630 }, 00:08:17.630 { 00:08:17.630 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:17.630 "subtype": "NVMe", 00:08:17.630 "listen_addresses": [ 00:08:17.630 { 00:08:17.630 "trtype": "RDMA", 00:08:17.630 "adrfam": "IPv4", 00:08:17.630 "traddr": "192.168.100.8", 00:08:17.630 "trsvcid": "4420" 00:08:17.630 } 00:08:17.630 ], 00:08:17.630 "allow_any_host": true, 00:08:17.630 "hosts": [], 00:08:17.630 "serial_number": "SPDK00000000000003", 00:08:17.630 "model_number": "SPDK bdev Controller", 00:08:17.630 "max_namespaces": 32, 00:08:17.630 "min_cntlid": 1, 00:08:17.630 "max_cntlid": 65519, 00:08:17.630 "namespaces": [ 00:08:17.630 { 00:08:17.630 "nsid": 1, 00:08:17.630 "bdev_name": "Null3", 00:08:17.630 "name": "Null3", 00:08:17.630 "nguid": "E424DA3308974B0197472F95EF82AFF0", 00:08:17.630 "uuid": "e424da33-0897-4b01-9747-2f95ef82aff0" 00:08:17.630 } 00:08:17.630 ] 00:08:17.630 }, 00:08:17.630 { 00:08:17.630 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:17.630 "subtype": "NVMe", 00:08:17.630 "listen_addresses": [ 00:08:17.630 { 00:08:17.630 "trtype": "RDMA", 00:08:17.630 "adrfam": "IPv4", 00:08:17.630 "traddr": "192.168.100.8", 00:08:17.630 "trsvcid": "4420" 00:08:17.630 } 00:08:17.630 ], 00:08:17.630 "allow_any_host": true, 00:08:17.630 "hosts": [], 00:08:17.630 "serial_number": "SPDK00000000000004", 00:08:17.630 "model_number": "SPDK bdev Controller", 00:08:17.630 "max_namespaces": 32, 00:08:17.630 "min_cntlid": 1, 00:08:17.630 "max_cntlid": 65519, 00:08:17.630 "namespaces": [ 00:08:17.630 { 00:08:17.630 "nsid": 1, 00:08:17.630 "bdev_name": "Null4", 00:08:17.630 "name": "Null4", 00:08:17.630 "nguid": "B1CEDB06D2D342D284F19FDD6D88C58B", 00:08:17.630 "uuid": "b1cedb06-d2d3-42d2-84f1-9fdd6d88c58b" 00:08:17.630 } 00:08:17.630 ] 00:08:17.630 } 00:08:17.630 ] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- 
target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.630 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:08:17.631 23:34:06 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.631 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.631 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.631 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:17.631 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:17.631 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:17.631 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:17.631 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:17.889 rmmod nvme_rdma 00:08:17.889 rmmod nvme_fabrics 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1328957 ']' 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1328957 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@942 -- # '[' -z 1328957 ']' 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@946 -- # kill -0 1328957 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@947 -- # uname 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1328957 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1328957' 00:08:17.889 killing process with pid 1328957 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@961 -- # kill 1328957 00:08:17.889 23:34:06 nvmf_rdma.nvmf_target_discovery -- 
common/autotest_common.sh@966 -- # wait 1328957 00:08:18.147 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.147 23:34:06 nvmf_rdma.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:18.147 00:08:18.147 real 0m7.146s 00:08:18.147 user 0m8.076s 00:08:18.147 sys 0m4.306s 00:08:18.147 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:18.147 23:34:06 nvmf_rdma.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.147 ************************************ 00:08:18.147 END TEST nvmf_target_discovery 00:08:18.147 ************************************ 00:08:18.147 23:34:07 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:08:18.147 23:34:07 nvmf_rdma -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:18.147 23:34:07 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:18.147 23:34:07 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:18.147 23:34:07 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:18.147 ************************************ 00:08:18.147 START TEST nvmf_referrals 00:08:18.147 ************************************ 00:08:18.147 23:34:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:08:18.147 * Looking for test storage... 00:08:18.147 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:18.147 23:34:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.147 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:18.406 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:18.407 23:34:07 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:23.680 23:34:12 
nvmf_rdma.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:23.680 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:23.680 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 
0x1015 == \0\x\1\0\1\9 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:23.680 Found net devices under 0000:da:00.0: mlx_0_0 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:23.680 Found net devices under 0000:da:00.1: mlx_0_1 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:23.680 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:23.681 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:23.681 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:23.681 altname enp218s0f0np0 00:08:23.681 altname ens818f0np0 00:08:23.681 inet 192.168.100.8/24 scope global mlx_0_0 00:08:23.681 valid_lft forever preferred_lft forever 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:23.681 
23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:23.681 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:23.681 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:23.681 altname enp218s0f1np1 00:08:23.681 altname ens818f1np1 00:08:23.681 inet 192.168.100.9/24 scope global mlx_0_1 00:08:23.681 valid_lft forever preferred_lft forever 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk 
'{print $4}' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:23.681 192.168.100.9' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:23.681 192.168.100.9' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:23.681 192.168.100.9' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1332283 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1332283 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@823 -- # '[' -z 1332283 ']' 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
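The referrals test repeats the same environment bring-up as the discovery test: nvmf/common.sh re-scans the PCI bus (finding the two Mellanox 0x15b3:0x1015 functions at 0000:da:00.0/.1 again), loads the kernel RDMA stack, and re-derives the interface IPs. The module-loading step shown above (rdma_device_init / load_ib_rdma_modules) reduces to roughly:

    # Kernel modules loaded by nvmf/common.sh before any RDMA listener is created.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe $mod
    done
    # nvme-rdma is loaded separately, once the transport options are settled.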
00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.681 23:34:12 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.681 [2024-07-15 23:34:12.385623] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:08:23.681 [2024-07-15 23:34:12.385668] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.681 [2024-07-15 23:34:12.439994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.681 [2024-07-15 23:34:12.522489] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.681 [2024-07-15 23:34:12.522523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.681 [2024-07-15 23:34:12.522529] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.681 [2024-07-15 23:34:12.522535] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.681 [2024-07-15 23:34:12.522579] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.681 [2024-07-15 23:34:12.522622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.681 [2024-07-15 23:34:12.522638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.681 [2024-07-15 23:34:12.522738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:23.681 [2024-07-15 23:34:12.522739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.249 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:24.249 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@856 -- # return 0 00:08:24.249 23:34:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.249 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.249 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 [2024-07-15 23:34:13.267542] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb16cc0/0xb1b1b0) succeed. 00:08:24.507 [2024-07-15 23:34:13.276802] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb18300/0xb5c840) succeed. 
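From here the test exercises the referral RPCs proper: the discovery subsystem listens on 192.168.100.8:8009, three referrals pointing at 127.0.0.2-4 port 4430 are added, and the test checks that the RPC view and an nvme discover against port 8009 report the same three addresses. A condensed sketch of that add-and-verify cycle, with the jq filters taken from the trace:

    rpc=./scripts/rpc.py
    $rpc nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t rdma -a $ip -s 4430
    done
    # RPC view: traddr of every configured referral.
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
    # Host view: every discovery-log record except the current discovery subsystem.
    # (The test also passes an explicit --hostnqn/--hostid; omitted here.)
    nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

Both commands are expected to print 127.0.0.2, 127.0.0.3 and 127.0.0.4; the test then removes the referrals one by one and repeats the comparison.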
00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 [2024-07-15 23:34:13.398230] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.507 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.508 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.508 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.508 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.765 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@49 -- # [[ 
127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.766 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:25.024 23:34:13 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.283 
23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.283 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:25.541 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- 
target/referrals.sh@83 -- # get_referral_ips nvme 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:25.800 rmmod nvme_rdma 00:08:25.800 rmmod nvme_fabrics 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:25.800 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1332283 ']' 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1332283 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@942 -- # '[' -z 1332283 ']' 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@946 -- # kill -0 1332283 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@947 -- # uname 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1332283 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1332283' 00:08:25.801 killing process with pid 1332283 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@961 -- # kill 1332283 00:08:25.801 23:34:14 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@966 -- # wait 1332283 00:08:26.058 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.058 23:34:14 nvmf_rdma.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 
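The pass/fail criterion used throughout the referral checks above is that the list reported over RPC matches what a host-side discovery sees. Condensed, the two sides of that comparison (using the host NQN/ID and discovery address from this run) are:

    rpc_ips=$(rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort)
    nvme_ips=$(nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
                   --hostid=803833e2-2ada-e911-906e-0017a4403562 \
                   -t rdma -a 192.168.100.8 -s 8009 -o json |
               jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort)
    [[ "$rpc_ips" == "$nvme_ips" ]]

nvmf_discovery_add_referral and nvmf_discovery_remove_referral are what drive the referral counts checked above between 3 and 0.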
00:08:26.058 00:08:26.058 real 0m7.960s 00:08:26.058 user 0m12.177s 00:08:26.058 sys 0m4.546s 00:08:26.058 23:34:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:26.058 23:34:15 nvmf_rdma.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:26.058 ************************************ 00:08:26.058 END TEST nvmf_referrals 00:08:26.058 ************************************ 00:08:26.058 23:34:15 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:08:26.058 23:34:15 nvmf_rdma -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:26.059 23:34:15 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:26.059 23:34:15 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:26.059 23:34:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:26.317 ************************************ 00:08:26.317 START TEST nvmf_connect_disconnect 00:08:26.317 ************************************ 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:08:26.317 * Looking for test storage... 00:08:26.317 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.317 23:34:15 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:26.317 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
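The hostnqn/hostid pair generated while nvmf/common.sh is sourced here is what every later nvme discover and nvme connect call presents. Roughly (the parameter stripping is an assumption about how common.sh derives the ID; it matches the values seen in the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # bare UUID, here 803833e2-2ada-e911-906e-0017a4403562
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'          # later reset to 'nvme connect -i 15' once an mlx5 RDMA port is found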
00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:26.318 23:34:15 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:31.590 23:34:20 
nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:31.590 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:31.590 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:31.591 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:31.591 Found net devices under 0000:da:00.0: mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:31.591 Found net devices under 0000:da:00.1: mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:08:31.591 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:31.591 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:31.591 altname enp218s0f0np0 00:08:31.591 altname ens818f0np0 00:08:31.591 inet 192.168.100.8/24 scope global mlx_0_0 00:08:31.591 valid_lft forever preferred_lft forever 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:31.591 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:31.591 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:31.591 altname enp218s0f1np1 00:08:31.591 altname ens818f1np1 00:08:31.591 inet 192.168.100.9/24 scope global mlx_0_1 00:08:31.591 valid_lft forever preferred_lft forever 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:31.591 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:31.592 192.168.100.9' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:31.592 192.168.100.9' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:31.592 192.168.100.9' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:31.592 23:34:20 
nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1335888 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1335888 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@823 -- # '[' -z 1335888 ']' 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:31.592 23:34:20 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:31.592 [2024-07-15 23:34:20.323207] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:08:31.592 [2024-07-15 23:34:20.323252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.592 [2024-07-15 23:34:20.378882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.592 [2024-07-15 23:34:20.453448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.592 [2024-07-15 23:34:20.453486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.592 [2024-07-15 23:34:20.453492] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.592 [2024-07-15 23:34:20.453497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.592 [2024-07-15 23:34:20.453502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
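Before this second target instance could start, load_ib_rdma_modules (traced further up) pulled in the RDMA kernel stack. The by-hand equivalent is just a series of modprobes:

    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"                  # target-side IB/RDMA core modules
    done
    modprobe nvme-rdma                   # host-side driver used by the nvme discover/connect calls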
00:08:31.592 [2024-07-15 23:34:20.453594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.592 [2024-07-15 23:34:20.453637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.592 [2024-07-15 23:34:20.453723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.592 [2024-07-15 23:34:20.453724] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # return 0 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:32.179 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.438 [2024-07-15 23:34:21.163666] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:32.438 [2024-07-15 23:34:21.183716] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x19a1cc0/0x19a61b0) succeed. 00:08:32.438 [2024-07-15 23:34:21.192776] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x19a3300/0x19e7840) succeed. 
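With the RDMA transport created (the rdma.c warning notes that the requested -c 0 in-capsule size is clamped to 256 bytes, the minimum needed for msdbd=16), the trace that follows builds the test subsystem and then runs the connect/disconnect loop, whose body itself is not traced (set +x). Condensed, with the loop inferred from the variables set earlier and from the nvme-cli "disconnected 1 controller(s)" output:

    rpc_cmd bdev_malloc_create 64 512                                          # creates Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    for ((i = 0; i < 5; i++)); do                                              # num_iterations=5
        nvme connect -i 15 "${NVME_HOST[@]}" -t rdma -n nqn.2016-06.io.spdk:cnode1 \
            -a 192.168.100.8 -s 4420                                           # inferred, not traced
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1                          # prints the NQN:... lines below
    done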
00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@553 -- # xtrace_disable 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:32.438 [2024-07-15 23:34:21.332099] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:32.438 23:34:21 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:36.627 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.483 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:08:52.483 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:08:52.483 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect 
-- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:52.484 rmmod nvme_rdma 00:08:52.484 rmmod nvme_fabrics 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1335888 ']' 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1335888 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@942 -- # '[' -z 1335888 ']' 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # kill -0 1335888 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # uname 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1335888 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1335888' 00:08:52.484 killing process with pid 1335888 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@961 -- # kill 1335888 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # wait 1335888 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:52.484 00:08:52.484 real 0m26.326s 00:08:52.484 user 1m24.578s 00:08:52.484 sys 0m4.800s 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:52.484 23:34:41 nvmf_rdma.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:52.484 ************************************ 00:08:52.484 END TEST nvmf_connect_disconnect 00:08:52.484 ************************************ 00:08:52.484 23:34:41 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:08:52.484 23:34:41 nvmf_rdma -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:52.484 23:34:41 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:52.484 23:34:41 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:52.484 23:34:41 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:52.484 ************************************ 00:08:52.484 START TEST nvmf_multitarget 00:08:52.484 ************************************ 00:08:52.484 23:34:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:08:52.743 * Looking for test storage... 00:08:52.743 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.743 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.744 23:34:41 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:58.013 23:34:46 
nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:58.013 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:58.014 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:58.014 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:58.014 Found net devices under 0000:da:00.0: mlx_0_0 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:58.014 Found net devices under 0000:da:00.1: mlx_0_1 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:58.014 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:58.014 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:08:58.014 altname enp218s0f0np0 00:08:58.014 altname ens818f0np0 00:08:58.014 inet 192.168.100.8/24 scope global mlx_0_0 00:08:58.014 valid_lft forever preferred_lft forever 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:58.014 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:58.014 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:58.014 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:08:58.014 altname enp218s0f1np1 00:08:58.014 altname ens818f1np1 00:08:58.015 inet 192.168.100.9/24 scope global mlx_0_1 00:08:58.015 valid_lft forever preferred_lft forever 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:58.015 192.168.100.9' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:58.015 192.168.100.9' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:58.015 192.168.100.9' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- 
nvmf/common.sh@458 -- # tail -n +2 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1342305 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1342305 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@823 -- # '[' -z 1342305 ']' 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@828 -- # local max_retries=100 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@832 -- # xtrace_disable 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.015 23:34:46 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:58.015 [2024-07-15 23:34:46.284425] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:08:58.015 [2024-07-15 23:34:46.284473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.015 [2024-07-15 23:34:46.337815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:58.015 [2024-07-15 23:34:46.420347] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.015 [2024-07-15 23:34:46.420381] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.015 [2024-07-15 23:34:46.420388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.015 [2024-07-15 23:34:46.420394] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.015 [2024-07-15 23:34:46.420399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
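For reference, the nvmfappstart step traced above amounts to launching the target binary with the flags recorded in the log and then waiting for its RPC socket to come up. A minimal sketch, assuming the stock scripts/rpc.py client and using rpc_get_methods as the readiness probe (the harness uses its own waitforlisten helper instead):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the application answers
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
            sleep 1
    done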
00:08:58.015 [2024-07-15 23:34:46.420438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:58.015 [2024-07-15 23:34:46.420454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:58.015 [2024-07-15 23:34:46.420557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.015 [2024-07-15 23:34:46.420558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@856 -- # return 0 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.299 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:08:58.300 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:08:58.300 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:08:58.557 "nvmf_tgt_1" 00:08:58.557 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:08:58.557 "nvmf_tgt_2" 00:08:58.557 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.557 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:08:58.557 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:08:58.557 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:08:58.814 true 00:08:58.814 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:08:58.814 true 00:08:58.814 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:08:58.814 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.080 
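The pass/fail logic of this multitarget run is easier to see with the traced commands pulled together. A sketch, using the same multitarget_rpc.py helper and jq calls as the trace (the trap and error handling set by the test are omitted):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # only the default target at start
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]    # default target plus the two just created
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]    # back to just the default target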
23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:59.080 rmmod nvme_rdma 00:08:59.080 rmmod nvme_fabrics 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1342305 ']' 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1342305 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@942 -- # '[' -z 1342305 ']' 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@946 -- # kill -0 1342305 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@947 -- # uname 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1342305 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1342305' 00:08:59.080 killing process with pid 1342305 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@961 -- # kill 1342305 00:08:59.080 23:34:47 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@966 -- # wait 1342305 00:08:59.449 23:34:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.449 23:34:48 nvmf_rdma.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:59.449 00:08:59.449 real 0m6.648s 00:08:59.449 user 0m8.687s 00:08:59.449 sys 0m3.911s 00:08:59.449 23:34:48 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@1118 -- # xtrace_disable 00:08:59.449 23:34:48 nvmf_rdma.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:08:59.449 ************************************ 00:08:59.449 END TEST nvmf_multitarget 00:08:59.449 ************************************ 00:08:59.449 23:34:48 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:08:59.449 23:34:48 nvmf_rdma -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:59.449 23:34:48 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:08:59.449 23:34:48 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:08:59.449 23:34:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:08:59.449 ************************************ 00:08:59.449 START TEST nvmf_rpc 00:08:59.449 ************************************ 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:08:59.449 * Looking for test storage... 00:08:59.449 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.449 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.450 23:34:48 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:08:59.450 23:34:48 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
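The rpc test then repeats the same NIC discovery: the mlx device-ID list assembled above is matched against the PCI bus, which on this node resolves to the two Mellanox 0x15b3:0x1015 ports reported just below. Outside the harness the same check can be made directly; the lspci invocation here is an assumption about available tooling, while the IDs and bus addresses are the ones the trace prints.

    # list PCI functions matching vendor 0x15b3, device 0x1015, with numeric IDs shown
    lspci -nn -d 15b3:1015
    # the trace finds 0000:da:00.0 and 0000:da:00.1, exposed as net devices mlx_0_0 / mlx_0_1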
00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:04.753 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:04.753 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:04.753 Found net devices under 0000:da:00.0: mlx_0_0 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:04.753 Found net devices under 0000:da:00.1: mlx_0_1 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:04.753 23:34:53 
nvmf_rdma.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.753 
23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:04.753 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.753 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:04.753 altname enp218s0f0np0 00:09:04.753 altname ens818f0np0 00:09:04.753 inet 192.168.100.8/24 scope global mlx_0_0 00:09:04.753 valid_lft forever preferred_lft forever 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:04.753 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:04.753 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:04.753 altname enp218s0f1np1 00:09:04.753 altname ens818f1np1 00:09:04.753 inet 192.168.100.9/24 scope global mlx_0_1 00:09:04.753 valid_lft forever preferred_lft forever 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:04.753 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:04.754 192.168.100.9' 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:04.754 192.168.100.9' 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:04.754 192.168.100.9' 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:09:04.754 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1345839 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1345839 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@823 -- # '[' -z 1345839 ']' 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:05.013 23:34:53 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.013 23:34:53 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.013 [2024-07-15 23:34:53.805196] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:09:05.013 [2024-07-15 23:34:53.805243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.013 [2024-07-15 23:34:53.860443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.013 [2024-07-15 23:34:53.943251] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.013 [2024-07-15 23:34:53.943286] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.013 [2024-07-15 23:34:53.943293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.013 [2024-07-15 23:34:53.943299] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.013 [2024-07-15 23:34:53.943304] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.013 [2024-07-15 23:34:53.943344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.013 [2024-07-15 23:34:53.943358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.013 [2024-07-15 23:34:53.943450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.013 [2024-07-15 23:34:53.943451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@856 -- # return 0 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:05.949 "tick_rate": 2100000000, 00:09:05.949 "poll_groups": [ 00:09:05.949 { 00:09:05.949 "name": "nvmf_tgt_poll_group_000", 00:09:05.949 "admin_qpairs": 0, 00:09:05.949 "io_qpairs": 0, 00:09:05.949 "current_admin_qpairs": 0, 00:09:05.949 "current_io_qpairs": 0, 00:09:05.949 "pending_bdev_io": 0, 00:09:05.949 
"completed_nvme_io": 0, 00:09:05.949 "transports": [] 00:09:05.949 }, 00:09:05.949 { 00:09:05.949 "name": "nvmf_tgt_poll_group_001", 00:09:05.949 "admin_qpairs": 0, 00:09:05.949 "io_qpairs": 0, 00:09:05.949 "current_admin_qpairs": 0, 00:09:05.949 "current_io_qpairs": 0, 00:09:05.949 "pending_bdev_io": 0, 00:09:05.949 "completed_nvme_io": 0, 00:09:05.949 "transports": [] 00:09:05.949 }, 00:09:05.949 { 00:09:05.949 "name": "nvmf_tgt_poll_group_002", 00:09:05.949 "admin_qpairs": 0, 00:09:05.949 "io_qpairs": 0, 00:09:05.949 "current_admin_qpairs": 0, 00:09:05.949 "current_io_qpairs": 0, 00:09:05.949 "pending_bdev_io": 0, 00:09:05.949 "completed_nvme_io": 0, 00:09:05.949 "transports": [] 00:09:05.949 }, 00:09:05.949 { 00:09:05.949 "name": "nvmf_tgt_poll_group_003", 00:09:05.949 "admin_qpairs": 0, 00:09:05.949 "io_qpairs": 0, 00:09:05.949 "current_admin_qpairs": 0, 00:09:05.949 "current_io_qpairs": 0, 00:09:05.949 "pending_bdev_io": 0, 00:09:05.949 "completed_nvme_io": 0, 00:09:05.949 "transports": [] 00:09:05.949 } 00:09:05.949 ] 00:09:05.949 }' 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:05.949 [2024-07-15 23:34:54.782873] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x847cd0/0x84c1c0) succeed. 00:09:05.949 [2024-07-15 23:34:54.792142] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x849310/0x88d850) succeed. 
00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:05.949 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:05.950 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:05.950 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.209 23:34:54 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:06.209 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:06.209 "tick_rate": 2100000000, 00:09:06.209 "poll_groups": [ 00:09:06.209 { 00:09:06.209 "name": "nvmf_tgt_poll_group_000", 00:09:06.209 "admin_qpairs": 0, 00:09:06.209 "io_qpairs": 0, 00:09:06.209 "current_admin_qpairs": 0, 00:09:06.209 "current_io_qpairs": 0, 00:09:06.209 "pending_bdev_io": 0, 00:09:06.209 "completed_nvme_io": 0, 00:09:06.209 "transports": [ 00:09:06.209 { 00:09:06.209 "trtype": "RDMA", 00:09:06.209 "pending_data_buffer": 0, 00:09:06.209 "devices": [ 00:09:06.209 { 00:09:06.209 "name": "mlx5_0", 00:09:06.209 "polls": 15334, 00:09:06.209 "idle_polls": 15334, 00:09:06.209 "completions": 0, 00:09:06.209 "requests": 0, 00:09:06.209 "request_latency": 0, 00:09:06.209 "pending_free_request": 0, 00:09:06.209 "pending_rdma_read": 0, 00:09:06.209 "pending_rdma_write": 0, 00:09:06.209 "pending_rdma_send": 0, 00:09:06.209 "total_send_wrs": 0, 00:09:06.209 "send_doorbell_updates": 0, 00:09:06.209 "total_recv_wrs": 4096, 00:09:06.209 "recv_doorbell_updates": 1 00:09:06.209 }, 00:09:06.209 { 00:09:06.209 "name": "mlx5_1", 00:09:06.209 "polls": 15334, 00:09:06.209 "idle_polls": 15334, 00:09:06.209 "completions": 0, 00:09:06.209 "requests": 0, 00:09:06.209 "request_latency": 0, 00:09:06.209 "pending_free_request": 0, 00:09:06.209 "pending_rdma_read": 0, 00:09:06.209 "pending_rdma_write": 0, 00:09:06.209 "pending_rdma_send": 0, 00:09:06.209 "total_send_wrs": 0, 00:09:06.209 "send_doorbell_updates": 0, 00:09:06.209 "total_recv_wrs": 4096, 00:09:06.209 "recv_doorbell_updates": 1 00:09:06.209 } 00:09:06.209 ] 00:09:06.209 } 00:09:06.209 ] 00:09:06.209 }, 00:09:06.209 { 00:09:06.209 "name": "nvmf_tgt_poll_group_001", 00:09:06.209 "admin_qpairs": 0, 00:09:06.209 "io_qpairs": 0, 00:09:06.209 "current_admin_qpairs": 0, 00:09:06.209 "current_io_qpairs": 0, 00:09:06.209 "pending_bdev_io": 0, 00:09:06.209 "completed_nvme_io": 0, 00:09:06.209 "transports": [ 00:09:06.209 { 00:09:06.209 "trtype": "RDMA", 00:09:06.209 "pending_data_buffer": 0, 00:09:06.209 "devices": [ 00:09:06.209 { 00:09:06.209 "name": "mlx5_0", 00:09:06.209 "polls": 10099, 00:09:06.209 "idle_polls": 10099, 00:09:06.209 "completions": 0, 00:09:06.209 "requests": 0, 00:09:06.209 "request_latency": 0, 00:09:06.209 "pending_free_request": 0, 00:09:06.209 "pending_rdma_read": 0, 00:09:06.209 "pending_rdma_write": 0, 00:09:06.209 "pending_rdma_send": 0, 00:09:06.209 "total_send_wrs": 0, 00:09:06.209 "send_doorbell_updates": 0, 00:09:06.209 "total_recv_wrs": 4096, 00:09:06.209 "recv_doorbell_updates": 1 00:09:06.209 }, 00:09:06.209 { 00:09:06.209 "name": "mlx5_1", 00:09:06.209 "polls": 10099, 00:09:06.209 "idle_polls": 10099, 00:09:06.209 "completions": 0, 00:09:06.209 "requests": 0, 00:09:06.209 "request_latency": 0, 00:09:06.209 "pending_free_request": 0, 00:09:06.209 "pending_rdma_read": 0, 00:09:06.209 "pending_rdma_write": 0, 00:09:06.209 "pending_rdma_send": 0, 00:09:06.209 "total_send_wrs": 0, 00:09:06.209 "send_doorbell_updates": 0, 00:09:06.209 "total_recv_wrs": 4096, 00:09:06.209 "recv_doorbell_updates": 
1 00:09:06.209 } 00:09:06.209 ] 00:09:06.209 } 00:09:06.209 ] 00:09:06.209 }, 00:09:06.209 { 00:09:06.209 "name": "nvmf_tgt_poll_group_002", 00:09:06.209 "admin_qpairs": 0, 00:09:06.209 "io_qpairs": 0, 00:09:06.209 "current_admin_qpairs": 0, 00:09:06.209 "current_io_qpairs": 0, 00:09:06.209 "pending_bdev_io": 0, 00:09:06.209 "completed_nvme_io": 0, 00:09:06.209 "transports": [ 00:09:06.209 { 00:09:06.209 "trtype": "RDMA", 00:09:06.209 "pending_data_buffer": 0, 00:09:06.209 "devices": [ 00:09:06.209 { 00:09:06.209 "name": "mlx5_0", 00:09:06.209 "polls": 5408, 00:09:06.209 "idle_polls": 5408, 00:09:06.209 "completions": 0, 00:09:06.209 "requests": 0, 00:09:06.209 "request_latency": 0, 00:09:06.209 "pending_free_request": 0, 00:09:06.209 "pending_rdma_read": 0, 00:09:06.209 "pending_rdma_write": 0, 00:09:06.209 "pending_rdma_send": 0, 00:09:06.209 "total_send_wrs": 0, 00:09:06.209 "send_doorbell_updates": 0, 00:09:06.209 "total_recv_wrs": 4096, 00:09:06.209 "recv_doorbell_updates": 1 00:09:06.209 }, 00:09:06.209 { 00:09:06.209 "name": "mlx5_1", 00:09:06.209 "polls": 5408, 00:09:06.209 "idle_polls": 5408, 00:09:06.209 "completions": 0, 00:09:06.209 "requests": 0, 00:09:06.209 "request_latency": 0, 00:09:06.209 "pending_free_request": 0, 00:09:06.209 "pending_rdma_read": 0, 00:09:06.209 "pending_rdma_write": 0, 00:09:06.209 "pending_rdma_send": 0, 00:09:06.209 "total_send_wrs": 0, 00:09:06.209 "send_doorbell_updates": 0, 00:09:06.209 "total_recv_wrs": 4096, 00:09:06.209 "recv_doorbell_updates": 1 00:09:06.209 } 00:09:06.209 ] 00:09:06.209 } 00:09:06.209 ] 00:09:06.209 }, 00:09:06.209 { 00:09:06.209 "name": "nvmf_tgt_poll_group_003", 00:09:06.209 "admin_qpairs": 0, 00:09:06.209 "io_qpairs": 0, 00:09:06.209 "current_admin_qpairs": 0, 00:09:06.209 "current_io_qpairs": 0, 00:09:06.209 "pending_bdev_io": 0, 00:09:06.209 "completed_nvme_io": 0, 00:09:06.209 "transports": [ 00:09:06.209 { 00:09:06.209 "trtype": "RDMA", 00:09:06.209 "pending_data_buffer": 0, 00:09:06.209 "devices": [ 00:09:06.209 { 00:09:06.209 "name": "mlx5_0", 00:09:06.209 "polls": 893, 00:09:06.209 "idle_polls": 893, 00:09:06.209 "completions": 0, 00:09:06.209 "requests": 0, 00:09:06.209 "request_latency": 0, 00:09:06.209 "pending_free_request": 0, 00:09:06.209 "pending_rdma_read": 0, 00:09:06.209 "pending_rdma_write": 0, 00:09:06.209 "pending_rdma_send": 0, 00:09:06.209 "total_send_wrs": 0, 00:09:06.209 "send_doorbell_updates": 0, 00:09:06.210 "total_recv_wrs": 4096, 00:09:06.210 "recv_doorbell_updates": 1 00:09:06.210 }, 00:09:06.210 { 00:09:06.210 "name": "mlx5_1", 00:09:06.210 "polls": 893, 00:09:06.210 "idle_polls": 893, 00:09:06.210 "completions": 0, 00:09:06.210 "requests": 0, 00:09:06.210 "request_latency": 0, 00:09:06.210 "pending_free_request": 0, 00:09:06.210 "pending_rdma_read": 0, 00:09:06.210 "pending_rdma_write": 0, 00:09:06.210 "pending_rdma_send": 0, 00:09:06.210 "total_send_wrs": 0, 00:09:06.210 "send_doorbell_updates": 0, 00:09:06.210 "total_recv_wrs": 4096, 00:09:06.210 "recv_doorbell_updates": 1 00:09:06.210 } 00:09:06.210 ] 00:09:06.210 } 00:09:06.210 ] 00:09:06.210 } 00:09:06.210 ] 00:09:06.210 }' 00:09:06.210 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:06.210 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:06.210 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:06.210 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:06.210 
23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:06.210 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:06.210 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:06.210 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:06.210 23:34:54 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.210 Malloc1 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:06.210 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:06.468 
23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.468 [2024-07-15 23:34:55.223504] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@644 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@630 -- # local arg=nvme 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:09:06.468 [2024-07-15 23:34:55.265342] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:09:06.468 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:06.468 could not add new controller: failed to write to nvme-fabrics device 00:09:06.468 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@645 -- # es=1 00:09:06.469 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:06.469 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:06.469 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:06.469 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:06.469 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:06.469 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.469 23:34:55 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:06.469 23:34:55 nvmf_rdma.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:07.403 23:34:56 nvmf_rdma.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:07.403 23:34:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:07.403 23:34:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:07.403 23:34:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:07.403 23:34:56 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:09.305 23:34:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:09.563 23:34:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:09.563 23:34:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:09.563 23:34:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:09.563 23:34:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:09.563 23:34:58 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:09.563 23:34:58 nvmf_rdma.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:10.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@642 -- # local es=0 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@644 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@630 -- # local arg=nvme 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@634 -- # type -t nvme 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # type -P nvme 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # arg=/usr/sbin/nvme 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@636 -- # [[ -x /usr/sbin/nvme ]] 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@645 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:10.498 [2024-07-15 23:34:59.327112] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:09:10.498 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:10.498 could not add new controller: failed to write to nvme-fabrics device 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@645 -- # es=1 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:10.498 23:34:59 nvmf_rdma.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:11.457 23:35:00 nvmf_rdma.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.457 23:35:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:11.457 23:35:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.458 23:35:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:11.458 23:35:00 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:13.989 23:35:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:13.989 23:35:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:13.989 23:35:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c 
SPDKISFASTANDAWESOME 00:09:13.989 23:35:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:13.989 23:35:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:13.989 23:35:02 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:13.989 23:35:02 nvmf_rdma.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.555 [2024-07-15 23:35:03.380328] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.555 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:14.556 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:14.556 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:14.556 23:35:03 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.556 23:35:03 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:14.556 23:35:03 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:15.488 23:35:04 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:15.488 23:35:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:15.488 23:35:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.488 23:35:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:15.488 23:35:04 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:17.404 23:35:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:17.404 23:35:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:17.404 23:35:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:17.663 23:35:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:17.663 23:35:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.663 23:35:06 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:17.663 23:35:06 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
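[Editor's note: this stretch of the trace repeats one cycle five times: create a subsystem, expose Malloc1 through it, let the initiator connect over RDMA, verify the serial shows up in lsblk, then disconnect and tear the subsystem down. Written out as plain commands — a sketch reusing the same NQN, address, port and serial as the run, with target-side calls issued via scripts/rpc.py against the default /var/tmp/spdk.sock, and with --hostnqn/--hostid omitted on the assumption that allow_any_host makes them unnecessary — one iteration looks roughly like:

    # target side
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # host side
    nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # the test expects exactly 1 match
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # target-side teardown
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
]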
00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 [2024-07-15 23:35:07.413889] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:18.596 23:35:07 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:19.528 23:35:08 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:19.528 23:35:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:19.528 23:35:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:19.528 23:35:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:19.528 23:35:08 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:21.424 23:35:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:21.424 23:35:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:21.424 23:35:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:21.682 23:35:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:21.682 23:35:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:21.682 23:35:10 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:21.682 23:35:10 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o 
NAME,SERIAL 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.618 [2024-07-15 23:35:11.412643] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:22.618 23:35:11 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:23.553 23:35:12 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:23.553 23:35:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:23.553 23:35:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:23.553 23:35:12 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:23.553 23:35:12 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:25.464 23:35:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:25.464 23:35:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:25.464 23:35:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:25.464 23:35:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:25.464 23:35:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:25.464 23:35:14 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:25.464 23:35:14 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:26.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.400 23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:26.400 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:26.400 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:26.400 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.400 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:26.400 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:26.658 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:26.658 23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:26.658 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.658 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.658 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.659 [2024-07-15 23:35:15.421059] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.659 
23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:26.659 23:35:15 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:27.595 23:35:16 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:27.595 23:35:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:27.595 23:35:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:27.595 23:35:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:27.595 23:35:16 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:29.499 23:35:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:29.499 23:35:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:29.499 23:35:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:29.499 23:35:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:29.499 23:35:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:29.499 23:35:18 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:29.499 23:35:18 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:30.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.434 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.435 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.694 [2024-07-15 23:35:19.428292] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:30.694 23:35:19 nvmf_rdma.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:31.629 23:35:20 nvmf_rdma.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:31.629 23:35:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1192 -- # local i=0 00:09:31.629 23:35:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:09:31.629 23:35:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:09:31.629 23:35:20 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1199 -- # sleep 2 00:09:33.531 23:35:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:09:33.531 23:35:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:09:33.531 23:35:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:09:33.531 23:35:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:09:33.531 23:35:22 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:09:33.531 23:35:22 
nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1202 -- # return 0 00:09:33.531 23:35:22 nvmf_rdma.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:34.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1213 -- # local i=0 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1225 -- # return 0 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.466 [2024-07-15 23:35:23.436163] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.466 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 [2024-07-15 23:35:23.484312] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 [2024-07-15 23:35:23.536514] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 [2024-07-15 23:35:23.584695] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 [2024-07-15 23:35:23.632828] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.725 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.726 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.726 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:09:34.726 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:34.726 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:34.726 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:34.726 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:09:34.726 "tick_rate": 2100000000, 00:09:34.726 "poll_groups": [ 00:09:34.726 { 00:09:34.726 "name": "nvmf_tgt_poll_group_000", 00:09:34.726 "admin_qpairs": 2, 00:09:34.726 "io_qpairs": 27, 00:09:34.726 "current_admin_qpairs": 0, 00:09:34.726 "current_io_qpairs": 0, 00:09:34.726 "pending_bdev_io": 0, 00:09:34.726 "completed_nvme_io": 127, 00:09:34.726 "transports": [ 00:09:34.726 { 00:09:34.726 "trtype": "RDMA", 00:09:34.726 "pending_data_buffer": 0, 00:09:34.726 "devices": [ 00:09:34.726 { 00:09:34.726 "name": "mlx5_0", 00:09:34.726 "polls": 3507338, 00:09:34.726 "idle_polls": 3507015, 00:09:34.726 "completions": 365, 00:09:34.726 "requests": 182, 00:09:34.726 "request_latency": 30439292, 00:09:34.726 "pending_free_request": 0, 00:09:34.726 "pending_rdma_read": 0, 00:09:34.726 "pending_rdma_write": 0, 00:09:34.726 "pending_rdma_send": 0, 00:09:34.726 "total_send_wrs": 309, 00:09:34.726 "send_doorbell_updates": 159, 00:09:34.726 "total_recv_wrs": 4278, 00:09:34.726 "recv_doorbell_updates": 159 00:09:34.726 }, 00:09:34.726 { 00:09:34.726 "name": "mlx5_1", 00:09:34.726 "polls": 3507338, 00:09:34.726 "idle_polls": 3507338, 00:09:34.726 "completions": 0, 00:09:34.726 "requests": 0, 00:09:34.726 "request_latency": 0, 00:09:34.726 "pending_free_request": 0, 00:09:34.726 "pending_rdma_read": 0, 00:09:34.726 "pending_rdma_write": 0, 00:09:34.726 "pending_rdma_send": 0, 00:09:34.726 "total_send_wrs": 0, 00:09:34.726 "send_doorbell_updates": 0, 00:09:34.726 "total_recv_wrs": 4096, 00:09:34.726 "recv_doorbell_updates": 1 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 }, 00:09:34.726 { 00:09:34.726 "name": "nvmf_tgt_poll_group_001", 00:09:34.726 "admin_qpairs": 2, 00:09:34.726 "io_qpairs": 26, 00:09:34.726 "current_admin_qpairs": 0, 00:09:34.726 "current_io_qpairs": 0, 00:09:34.726 "pending_bdev_io": 0, 00:09:34.726 "completed_nvme_io": 76, 00:09:34.726 "transports": [ 00:09:34.726 { 00:09:34.726 "trtype": "RDMA", 00:09:34.726 
"pending_data_buffer": 0, 00:09:34.726 "devices": [ 00:09:34.726 { 00:09:34.726 "name": "mlx5_0", 00:09:34.726 "polls": 3607670, 00:09:34.726 "idle_polls": 3607430, 00:09:34.726 "completions": 260, 00:09:34.726 "requests": 130, 00:09:34.726 "request_latency": 19039822, 00:09:34.726 "pending_free_request": 0, 00:09:34.726 "pending_rdma_read": 0, 00:09:34.726 "pending_rdma_write": 0, 00:09:34.726 "pending_rdma_send": 0, 00:09:34.726 "total_send_wrs": 206, 00:09:34.726 "send_doorbell_updates": 117, 00:09:34.726 "total_recv_wrs": 4226, 00:09:34.726 "recv_doorbell_updates": 118 00:09:34.726 }, 00:09:34.726 { 00:09:34.726 "name": "mlx5_1", 00:09:34.726 "polls": 3607670, 00:09:34.726 "idle_polls": 3607670, 00:09:34.726 "completions": 0, 00:09:34.726 "requests": 0, 00:09:34.726 "request_latency": 0, 00:09:34.726 "pending_free_request": 0, 00:09:34.726 "pending_rdma_read": 0, 00:09:34.726 "pending_rdma_write": 0, 00:09:34.726 "pending_rdma_send": 0, 00:09:34.726 "total_send_wrs": 0, 00:09:34.726 "send_doorbell_updates": 0, 00:09:34.726 "total_recv_wrs": 4096, 00:09:34.726 "recv_doorbell_updates": 1 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 }, 00:09:34.726 { 00:09:34.726 "name": "nvmf_tgt_poll_group_002", 00:09:34.726 "admin_qpairs": 1, 00:09:34.726 "io_qpairs": 26, 00:09:34.726 "current_admin_qpairs": 0, 00:09:34.726 "current_io_qpairs": 0, 00:09:34.726 "pending_bdev_io": 0, 00:09:34.726 "completed_nvme_io": 126, 00:09:34.726 "transports": [ 00:09:34.726 { 00:09:34.726 "trtype": "RDMA", 00:09:34.726 "pending_data_buffer": 0, 00:09:34.726 "devices": [ 00:09:34.726 { 00:09:34.726 "name": "mlx5_0", 00:09:34.726 "polls": 3550335, 00:09:34.726 "idle_polls": 3550072, 00:09:34.726 "completions": 307, 00:09:34.726 "requests": 153, 00:09:34.726 "request_latency": 28936220, 00:09:34.726 "pending_free_request": 0, 00:09:34.726 "pending_rdma_read": 0, 00:09:34.726 "pending_rdma_write": 0, 00:09:34.726 "pending_rdma_send": 0, 00:09:34.726 "total_send_wrs": 266, 00:09:34.726 "send_doorbell_updates": 128, 00:09:34.726 "total_recv_wrs": 4249, 00:09:34.726 "recv_doorbell_updates": 128 00:09:34.726 }, 00:09:34.726 { 00:09:34.726 "name": "mlx5_1", 00:09:34.726 "polls": 3550335, 00:09:34.726 "idle_polls": 3550335, 00:09:34.726 "completions": 0, 00:09:34.726 "requests": 0, 00:09:34.726 "request_latency": 0, 00:09:34.726 "pending_free_request": 0, 00:09:34.726 "pending_rdma_read": 0, 00:09:34.726 "pending_rdma_write": 0, 00:09:34.726 "pending_rdma_send": 0, 00:09:34.726 "total_send_wrs": 0, 00:09:34.726 "send_doorbell_updates": 0, 00:09:34.726 "total_recv_wrs": 4096, 00:09:34.726 "recv_doorbell_updates": 1 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 }, 00:09:34.726 { 00:09:34.726 "name": "nvmf_tgt_poll_group_003", 00:09:34.726 "admin_qpairs": 2, 00:09:34.726 "io_qpairs": 26, 00:09:34.726 "current_admin_qpairs": 0, 00:09:34.726 "current_io_qpairs": 0, 00:09:34.726 "pending_bdev_io": 0, 00:09:34.726 "completed_nvme_io": 126, 00:09:34.726 "transports": [ 00:09:34.726 { 00:09:34.726 "trtype": "RDMA", 00:09:34.726 "pending_data_buffer": 0, 00:09:34.726 "devices": [ 00:09:34.726 { 00:09:34.726 "name": "mlx5_0", 00:09:34.726 "polls": 2769080, 00:09:34.726 "idle_polls": 2768764, 00:09:34.726 "completions": 358, 00:09:34.726 "requests": 179, 00:09:34.726 "request_latency": 32446306, 00:09:34.726 "pending_free_request": 0, 00:09:34.726 "pending_rdma_read": 0, 00:09:34.726 "pending_rdma_write": 0, 00:09:34.726 "pending_rdma_send": 0, 00:09:34.726 "total_send_wrs": 304, 
00:09:34.726 "send_doorbell_updates": 153, 00:09:34.726 "total_recv_wrs": 4275, 00:09:34.726 "recv_doorbell_updates": 154 00:09:34.726 }, 00:09:34.726 { 00:09:34.726 "name": "mlx5_1", 00:09:34.726 "polls": 2769080, 00:09:34.726 "idle_polls": 2769080, 00:09:34.726 "completions": 0, 00:09:34.726 "requests": 0, 00:09:34.726 "request_latency": 0, 00:09:34.726 "pending_free_request": 0, 00:09:34.726 "pending_rdma_read": 0, 00:09:34.726 "pending_rdma_write": 0, 00:09:34.726 "pending_rdma_send": 0, 00:09:34.726 "total_send_wrs": 0, 00:09:34.726 "send_doorbell_updates": 0, 00:09:34.726 "total_recv_wrs": 4096, 00:09:34.726 "recv_doorbell_updates": 1 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 } 00:09:34.726 ] 00:09:34.726 }' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@118 -- # (( 110861640 > 0 )) 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:34.985 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- 
nvmf/common.sh@121 -- # for i in {1..20} 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:34.986 rmmod nvme_rdma 00:09:34.986 rmmod nvme_fabrics 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1345839 ']' 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1345839 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@942 -- # '[' -z 1345839 ']' 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@946 -- # kill -0 1345839 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@947 -- # uname 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:34.986 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1345839 00:09:35.244 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:09:35.244 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:09:35.244 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1345839' 00:09:35.244 killing process with pid 1345839 00:09:35.244 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@961 -- # kill 1345839 00:09:35.244 23:35:23 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@966 -- # wait 1345839 00:09:35.503 23:35:24 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:35.503 23:35:24 nvmf_rdma.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:35.503 00:09:35.503 real 0m36.081s 00:09:35.503 user 2m2.592s 00:09:35.503 sys 0m5.611s 00:09:35.503 23:35:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:35.503 23:35:24 nvmf_rdma.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.503 ************************************ 00:09:35.503 END TEST nvmf_rpc 00:09:35.503 ************************************ 00:09:35.503 23:35:24 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:09:35.503 23:35:24 nvmf_rdma -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:35.503 23:35:24 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:35.503 23:35:24 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:35.503 23:35:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:35.503 ************************************ 00:09:35.503 START TEST nvmf_invalid 00:09:35.503 ************************************ 00:09:35.503 23:35:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:09:35.503 * Looking for test storage... 
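For reference, the jsum checks traced above (target/rpc.sh@19-20 and @112-@118) reduce to capturing the nvmf_get_stats JSON once and summing a numeric jq filter over it with awk. A minimal stand-alone sketch of that pattern (not part of the captured trace), assuming a running SPDK target and using scripts/rpc.py directly in place of the harness's rpc_cmd wrapper, would look roughly like this:

    # Capture the stats once, as the test does at target/rpc.sh@110 (sketch, not taken from the log).
    stats=$(./scripts/rpc.py nvmf_get_stats)
    # Sum an arbitrary numeric jq filter over the captured JSON (sketch of the jsum helper seen in the trace).
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }
    # The run above then asserts the sums are non-zero, e.g. 105 io_qpairs in total:
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].transports[].devices[].request_latency') > 0 ))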
00:09:35.503 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:35.503 23:35:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:35.503 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:09:35.503 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:35.503 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:35.503 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:35.503 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:35.504 23:35:24 
nvmf_rdma.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:09:35.504 23:35:24 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.839 
23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:40.839 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:40.839 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.839 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:40.840 Found net devices under 0000:da:00.0: mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:40.840 Found net devices under 0000:da:00.1: mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:40.840 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.840 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:40.840 altname enp218s0f0np0 00:09:40.840 altname ens818f0np0 00:09:40.840 inet 192.168.100.8/24 scope global mlx_0_0 00:09:40.840 valid_lft forever preferred_lft forever 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:40.840 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:40.840 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:40.840 altname enp218s0f1np1 00:09:40.840 altname ens818f1np1 00:09:40.840 inet 192.168.100.9/24 scope global mlx_0_1 00:09:40.840 valid_lft forever preferred_lft forever 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t 
rxe_net_devs 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:40.840 192.168.100.9' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:40.840 192.168.100.9' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:40.840 192.168.100.9' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1354044 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1354044 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@823 -- # '[' -z 1354044 ']' 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:40.840 23:35:29 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:40.840 [2024-07-15 23:35:29.567676] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:09:40.841 [2024-07-15 23:35:29.567733] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.841 [2024-07-15 23:35:29.624134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.841 [2024-07-15 23:35:29.709100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.841 [2024-07-15 23:35:29.709134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:40.841 [2024-07-15 23:35:29.709141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.841 [2024-07-15 23:35:29.709147] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.841 [2024-07-15 23:35:29.709152] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:40.841 [2024-07-15 23:35:29.709200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.841 [2024-07-15 23:35:29.709301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.841 [2024-07-15 23:35:29.709389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.841 [2024-07-15 23:35:29.709390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.408 23:35:30 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:41.408 23:35:30 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@856 -- # return 0 00:09:41.408 23:35:30 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:41.408 23:35:30 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:41.408 23:35:30 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:41.666 23:35:30 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:41.666 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:41.666 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30025 00:09:41.666 [2024-07-15 23:35:30.558977] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:09:41.666 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:09:41.666 { 00:09:41.666 "nqn": "nqn.2016-06.io.spdk:cnode30025", 00:09:41.666 "tgt_name": "foobar", 00:09:41.666 "method": "nvmf_create_subsystem", 00:09:41.666 "req_id": 1 00:09:41.666 } 00:09:41.666 Got JSON-RPC error response 00:09:41.666 response: 00:09:41.666 { 00:09:41.666 "code": -32603, 00:09:41.666 "message": "Unable to find target foobar" 00:09:41.666 }' 00:09:41.666 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:09:41.666 { 00:09:41.666 "nqn": "nqn.2016-06.io.spdk:cnode30025", 00:09:41.666 "tgt_name": "foobar", 00:09:41.666 "method": "nvmf_create_subsystem", 00:09:41.666 "req_id": 1 00:09:41.666 } 00:09:41.666 Got JSON-RPC error response 00:09:41.666 response: 00:09:41.666 { 00:09:41.666 "code": -32603, 00:09:41.666 "message": "Unable to find target foobar" 00:09:41.666 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:09:41.666 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:09:41.666 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13792 00:09:41.925 [2024-07-15 23:35:30.743645] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13792: invalid serial number 'SPDKISFASTANDAWESOME' 00:09:41.925 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:09:41.925 { 00:09:41.925 "nqn": "nqn.2016-06.io.spdk:cnode13792", 00:09:41.925 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:41.925 "method": "nvmf_create_subsystem", 00:09:41.925 "req_id": 1 00:09:41.925 } 00:09:41.925 Got JSON-RPC error response 00:09:41.925 response: 00:09:41.925 { 00:09:41.925 "code": -32602, 00:09:41.925 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:41.925 }' 00:09:41.925 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@46 -- # 
[[ request: 00:09:41.925 { 00:09:41.925 "nqn": "nqn.2016-06.io.spdk:cnode13792", 00:09:41.925 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:09:41.925 "method": "nvmf_create_subsystem", 00:09:41.925 "req_id": 1 00:09:41.925 } 00:09:41.925 Got JSON-RPC error response 00:09:41.925 response: 00:09:41.925 { 00:09:41.925 "code": -32602, 00:09:41.925 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:09:41.925 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:41.925 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:09:41.925 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24310 00:09:42.184 [2024-07-15 23:35:30.932225] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24310: invalid model number 'SPDK_Controller' 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:09:42.184 { 00:09:42.184 "nqn": "nqn.2016-06.io.spdk:cnode24310", 00:09:42.184 "model_number": "SPDK_Controller\u001f", 00:09:42.184 "method": "nvmf_create_subsystem", 00:09:42.184 "req_id": 1 00:09:42.184 } 00:09:42.184 Got JSON-RPC error response 00:09:42.184 response: 00:09:42.184 { 00:09:42.184 "code": -32602, 00:09:42.184 "message": "Invalid MN SPDK_Controller\u001f" 00:09:42.184 }' 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:09:42.184 { 00:09:42.184 "nqn": "nqn.2016-06.io.spdk:cnode24310", 00:09:42.184 "model_number": "SPDK_Controller\u001f", 00:09:42.184 "method": "nvmf_create_subsystem", 00:09:42.184 "req_id": 1 00:09:42.184 } 00:09:42.184 Got JSON-RPC error response 00:09:42.184 response: 00:09:42.184 { 00:09:42.184 "code": -32602, 00:09:42.184 "message": "Invalid MN SPDK_Controller\u001f" 00:09:42.184 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 74 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:42.184 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:09:42.185 23:35:30 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:09:42.185 23:35:31 
nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- 
# echo -e '\x74' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'rJDK`$4JL[I2m4q\'\''t*r;' 00:09:42.185 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'rJDK`$4JL[I2m4q\'\''t*r;' nqn.2016-06.io.spdk:cnode6050 00:09:42.446 [2024-07-15 23:35:31.261289] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6050: invalid serial number 'rJDK`$4JL[I2m4q\'t*r;' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:09:42.446 { 00:09:42.446 "nqn": "nqn.2016-06.io.spdk:cnode6050", 00:09:42.446 "serial_number": "rJDK`$4JL[I2m4q\\'\''t*r;", 00:09:42.446 "method": "nvmf_create_subsystem", 00:09:42.446 "req_id": 1 00:09:42.446 } 00:09:42.446 Got JSON-RPC error response 00:09:42.446 response: 00:09:42.446 { 00:09:42.446 "code": -32602, 00:09:42.446 "message": "Invalid SN rJDK`$4JL[I2m4q\\'\''t*r;" 00:09:42.446 }' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:09:42.446 { 00:09:42.446 "nqn": "nqn.2016-06.io.spdk:cnode6050", 00:09:42.446 "serial_number": "rJDK`$4JL[I2m4q\\'t*r;", 00:09:42.446 "method": "nvmf_create_subsystem", 00:09:42.446 "req_id": 1 00:09:42.446 } 00:09:42.446 Got JSON-RPC error response 00:09:42.446 response: 00:09:42.446 { 00:09:42.446 "code": -32602, 00:09:42.446 "message": "Invalid SN rJDK`$4JL[I2m4q\\'t*r;" 00:09:42.446 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' 
'53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x7a' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:09:42.446 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid 
-- target/invalid.sh@25 -- # string+=9 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.447 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:09:42.709 
23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:09:42.709 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 
23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@31 -- # echo 'H4K(IIzA#'\''D^zR9{$MWfPL2iFljK>OtYJ9w'\''C-:!=' 00:09:42.710 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'H4K(IIzA#'\''D^zR9{$MWfPL2iFljK>OtYJ9w'\''C-:!=' nqn.2016-06.io.spdk:cnode18392 00:09:42.969 [2024-07-15 23:35:31.702743] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18392: invalid model number 'H4K(IIzA#'D^zR9{$MWfPL2iFljK>OtYJ9w'C-:!=' 00:09:42.969 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:09:42.969 { 00:09:42.969 "nqn": "nqn.2016-06.io.spdk:cnode18392", 00:09:42.969 "model_number": "H4K(IIzA#'\''D^zR9{$MWfPL2iFljK>OtYJ9w'\''C-:!=", 00:09:42.969 "method": "nvmf_create_subsystem", 00:09:42.969 "req_id": 1 00:09:42.969 } 00:09:42.969 Got JSON-RPC error response 00:09:42.969 response: 00:09:42.969 { 00:09:42.969 "code": -32602, 00:09:42.969 "message": "Invalid MN H4K(IIzA#'\''D^zR9{$MWfPL2iFljK>OtYJ9w'\''C-:!=" 00:09:42.969 }' 00:09:42.969 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:09:42.969 { 00:09:42.969 "nqn": "nqn.2016-06.io.spdk:cnode18392", 00:09:42.969 "model_number": "H4K(IIzA#'D^zR9{$MWfPL2iFljK>OtYJ9w'C-:!=", 00:09:42.969 "method": "nvmf_create_subsystem", 00:09:42.969 "req_id": 1 00:09:42.969 } 00:09:42.969 Got JSON-RPC error response 00:09:42.969 response: 00:09:42.969 { 00:09:42.969 "code": -32602, 00:09:42.969 "message": "Invalid MN H4K(IIzA#'D^zR9{$MWfPL2iFljK>OtYJ9w'C-:!=" 00:09:42.969 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:09:42.970 23:35:31 nvmf_rdma.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:09:42.970 [2024-07-15 23:35:31.908686] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5e0560/0x5e4a50) succeed. 00:09:42.970 [2024-07-15 23:35:31.917837] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5e1ba0/0x6260e0) succeed. 
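The trace above walks through invalid.sh's negative checks one xtrace line at a time: it feeds deliberately malformed serial numbers, model numbers and cntlid ranges to nvmf_create_subsystem and greps the JSON-RPC error text. A condensed, standalone sketch of the same pattern is below; it is not part of the test suite itself, and it assumes the nvmf target application started earlier in this log is still running and reachable through the workspace rpc.py shown in the trace.

```bash
#!/usr/bin/env bash
# Condensed sketch of the negative checks traced above (not the original
# invalid.sh). Assumes a running SPDK nvmf target and the rpc.py path used
# in this workspace.
set -uo pipefail

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Serial number with a trailing control character (\x1f) -> "Invalid SN".
out=$($rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
      nqn.2016-06.io.spdk:cnode13792 2>&1 || true)
[[ $out == *"Invalid SN"* ]] || { echo "serial-number check failed"; exit 1; }

# Model number with a trailing control character -> "Invalid MN".
out=$($rpc nvmf_create_subsystem -d $'SPDK_Controller\037' \
      nqn.2016-06.io.spdk:cnode24310 2>&1 || true)
[[ $out == *"Invalid MN"* ]] || { echo "model-number check failed"; exit 1; }

# Out-of-range controller IDs (-i min_cntlid / -I max_cntlid)
# -> "Invalid cntlid range".
out=$($rpc nvmf_create_subsystem -i 0 nqn.2016-06.io.spdk:cnode2624 2>&1 || true)
[[ $out == *"Invalid cntlid range"* ]] || { echo "cntlid check failed"; exit 1; }

echo "all negative checks behaved as expected"
```

The error strings ("Invalid SN", "Invalid MN", "Invalid cntlid range") are the ones visible in the JSON-RPC responses captured above; the sketch simply short-circuits the bracket-test plumbing that the real script wraps around each call.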
00:09:43.229 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:09:43.488 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:09:43.488 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:09:43.488 192.168.100.9' 00:09:43.488 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:09:43.488 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:09:43.488 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:09:43.488 [2024-07-15 23:35:32.424583] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:09:43.488 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:09:43.488 { 00:09:43.488 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:43.488 "listen_address": { 00:09:43.488 "trtype": "rdma", 00:09:43.488 "traddr": "192.168.100.8", 00:09:43.488 "trsvcid": "4421" 00:09:43.488 }, 00:09:43.488 "method": "nvmf_subsystem_remove_listener", 00:09:43.488 "req_id": 1 00:09:43.488 } 00:09:43.488 Got JSON-RPC error response 00:09:43.488 response: 00:09:43.488 { 00:09:43.488 "code": -32602, 00:09:43.488 "message": "Invalid parameters" 00:09:43.488 }' 00:09:43.488 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:09:43.488 { 00:09:43.488 "nqn": "nqn.2016-06.io.spdk:cnode", 00:09:43.488 "listen_address": { 00:09:43.488 "trtype": "rdma", 00:09:43.488 "traddr": "192.168.100.8", 00:09:43.488 "trsvcid": "4421" 00:09:43.488 }, 00:09:43.488 "method": "nvmf_subsystem_remove_listener", 00:09:43.488 "req_id": 1 00:09:43.488 } 00:09:43.488 Got JSON-RPC error response 00:09:43.488 response: 00:09:43.488 { 00:09:43.488 "code": -32602, 00:09:43.488 "message": "Invalid parameters" 00:09:43.488 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:09:43.488 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2624 -i 0 00:09:43.747 [2024-07-15 23:35:32.605150] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2624: invalid cntlid range [0-65519] 00:09:43.747 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:09:43.747 { 00:09:43.747 "nqn": "nqn.2016-06.io.spdk:cnode2624", 00:09:43.747 "min_cntlid": 0, 00:09:43.747 "method": "nvmf_create_subsystem", 00:09:43.747 "req_id": 1 00:09:43.747 } 00:09:43.747 Got JSON-RPC error response 00:09:43.747 response: 00:09:43.747 { 00:09:43.747 "code": -32602, 00:09:43.747 "message": "Invalid cntlid range [0-65519]" 00:09:43.747 }' 00:09:43.747 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:09:43.747 { 00:09:43.747 "nqn": "nqn.2016-06.io.spdk:cnode2624", 00:09:43.747 "min_cntlid": 0, 00:09:43.747 "method": "nvmf_create_subsystem", 00:09:43.747 "req_id": 1 00:09:43.747 } 00:09:43.747 Got JSON-RPC error response 00:09:43.747 response: 00:09:43.747 { 00:09:43.747 "code": -32602, 00:09:43.747 "message": "Invalid cntlid range [0-65519]" 00:09:43.747 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:43.747 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12151 -i 65520 00:09:44.006 [2024-07-15 23:35:32.777789] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12151: invalid cntlid range [65520-65519] 00:09:44.006 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:09:44.006 { 00:09:44.006 "nqn": "nqn.2016-06.io.spdk:cnode12151", 00:09:44.006 "min_cntlid": 65520, 00:09:44.006 "method": "nvmf_create_subsystem", 00:09:44.006 "req_id": 1 00:09:44.006 } 00:09:44.006 Got JSON-RPC error response 00:09:44.006 response: 00:09:44.006 { 00:09:44.006 "code": -32602, 00:09:44.006 "message": "Invalid cntlid range [65520-65519]" 00:09:44.006 }' 00:09:44.006 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:09:44.006 { 00:09:44.006 "nqn": "nqn.2016-06.io.spdk:cnode12151", 00:09:44.006 "min_cntlid": 65520, 00:09:44.006 "method": "nvmf_create_subsystem", 00:09:44.006 "req_id": 1 00:09:44.006 } 00:09:44.006 Got JSON-RPC error response 00:09:44.006 response: 00:09:44.006 { 00:09:44.006 "code": -32602, 00:09:44.006 "message": "Invalid cntlid range [65520-65519]" 00:09:44.006 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:44.006 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24782 -I 0 00:09:44.006 [2024-07-15 23:35:32.958438] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24782: invalid cntlid range [1-0] 00:09:44.266 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:09:44.266 { 00:09:44.266 "nqn": "nqn.2016-06.io.spdk:cnode24782", 00:09:44.266 "max_cntlid": 0, 00:09:44.266 "method": "nvmf_create_subsystem", 00:09:44.266 "req_id": 1 00:09:44.266 } 00:09:44.266 Got JSON-RPC error response 00:09:44.266 response: 00:09:44.266 { 00:09:44.266 "code": -32602, 00:09:44.266 "message": "Invalid cntlid range [1-0]" 00:09:44.266 }' 00:09:44.266 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:09:44.266 { 00:09:44.266 "nqn": "nqn.2016-06.io.spdk:cnode24782", 00:09:44.266 "max_cntlid": 0, 00:09:44.266 "method": "nvmf_create_subsystem", 00:09:44.266 "req_id": 1 00:09:44.266 } 00:09:44.266 Got JSON-RPC error response 00:09:44.266 response: 00:09:44.266 { 00:09:44.266 "code": -32602, 00:09:44.266 "message": "Invalid cntlid range [1-0]" 00:09:44.266 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:44.266 23:35:32 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6329 -I 65520 00:09:44.266 [2024-07-15 23:35:33.135086] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6329: invalid cntlid range [1-65520] 00:09:44.266 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:09:44.266 { 00:09:44.266 "nqn": "nqn.2016-06.io.spdk:cnode6329", 00:09:44.266 "max_cntlid": 65520, 00:09:44.266 "method": "nvmf_create_subsystem", 00:09:44.266 "req_id": 1 00:09:44.266 } 00:09:44.266 Got JSON-RPC error response 00:09:44.266 response: 00:09:44.266 { 00:09:44.266 "code": -32602, 00:09:44.266 "message": "Invalid cntlid range [1-65520]" 00:09:44.266 }' 00:09:44.266 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:09:44.266 { 00:09:44.266 "nqn": "nqn.2016-06.io.spdk:cnode6329", 
00:09:44.266 "max_cntlid": 65520, 00:09:44.266 "method": "nvmf_create_subsystem", 00:09:44.266 "req_id": 1 00:09:44.266 } 00:09:44.266 Got JSON-RPC error response 00:09:44.266 response: 00:09:44.266 { 00:09:44.266 "code": -32602, 00:09:44.266 "message": "Invalid cntlid range [1-65520]" 00:09:44.266 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:44.266 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27659 -i 6 -I 5 00:09:44.524 [2024-07-15 23:35:33.315735] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27659: invalid cntlid range [6-5] 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:09:44.524 { 00:09:44.524 "nqn": "nqn.2016-06.io.spdk:cnode27659", 00:09:44.524 "min_cntlid": 6, 00:09:44.524 "max_cntlid": 5, 00:09:44.524 "method": "nvmf_create_subsystem", 00:09:44.524 "req_id": 1 00:09:44.524 } 00:09:44.524 Got JSON-RPC error response 00:09:44.524 response: 00:09:44.524 { 00:09:44.524 "code": -32602, 00:09:44.524 "message": "Invalid cntlid range [6-5]" 00:09:44.524 }' 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:09:44.524 { 00:09:44.524 "nqn": "nqn.2016-06.io.spdk:cnode27659", 00:09:44.524 "min_cntlid": 6, 00:09:44.524 "max_cntlid": 5, 00:09:44.524 "method": "nvmf_create_subsystem", 00:09:44.524 "req_id": 1 00:09:44.524 } 00:09:44.524 Got JSON-RPC error response 00:09:44.524 response: 00:09:44.524 { 00:09:44.524 "code": -32602, 00:09:44.524 "message": "Invalid cntlid range [6-5]" 00:09:44.524 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:09:44.524 { 00:09:44.524 "name": "foobar", 00:09:44.524 "method": "nvmf_delete_target", 00:09:44.524 "req_id": 1 00:09:44.524 } 00:09:44.524 Got JSON-RPC error response 00:09:44.524 response: 00:09:44.524 { 00:09:44.524 "code": -32602, 00:09:44.524 "message": "The specified target doesn'\''t exist, cannot delete it." 00:09:44.524 }' 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:09:44.524 { 00:09:44.524 "name": "foobar", 00:09:44.524 "method": "nvmf_delete_target", 00:09:44.524 "req_id": 1 00:09:44.524 } 00:09:44.524 Got JSON-RPC error response 00:09:44.524 response: 00:09:44.524 { 00:09:44.524 "code": -32602, 00:09:44.524 "message": "The specified target doesn't exist, cannot delete it." 
00:09:44.524 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:44.524 rmmod nvme_rdma 00:09:44.524 rmmod nvme_fabrics 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1354044 ']' 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1354044 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@942 -- # '[' -z 1354044 ']' 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@946 -- # kill -0 1354044 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@947 -- # uname 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:44.524 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1354044 00:09:44.782 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:09:44.782 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:09:44.782 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1354044' 00:09:44.782 killing process with pid 1354044 00:09:44.782 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@961 -- # kill 1354044 00:09:44.782 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@966 -- # wait 1354044 00:09:45.041 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.041 23:35:33 nvmf_rdma.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:45.041 00:09:45.041 real 0m9.476s 00:09:45.041 user 0m19.924s 00:09:45.041 sys 0m4.780s 00:09:45.041 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:45.041 23:35:33 nvmf_rdma.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:09:45.041 ************************************ 00:09:45.041 END TEST nvmf_invalid 00:09:45.041 ************************************ 00:09:45.041 23:35:33 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:09:45.041 23:35:33 nvmf_rdma -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:45.041 23:35:33 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:45.041 23:35:33 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:45.041 
23:35:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:45.041 ************************************ 00:09:45.041 START TEST nvmf_abort 00:09:45.041 ************************************ 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:09:45.041 * Looking for test storage... 00:09:45.041 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.041 23:35:33 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- 
nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:50.312 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:50.312 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:50.312 Found net devices under 0000:da:00.0: mlx_0_0 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:50.312 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:50.313 Found net devices under 0000:da:00.1: mlx_0_1 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:50.313 23:35:38 nvmf_rdma.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_0 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:50.313 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:50.313 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:50.313 altname enp218s0f0np0 00:09:50.313 altname ens818f0np0 00:09:50.313 inet 192.168.100.8/24 scope global mlx_0_0 00:09:50.313 valid_lft forever preferred_lft forever 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:50.313 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:50.313 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:50.313 altname enp218s0f1np1 00:09:50.313 altname ens818f1np1 00:09:50.313 inet 192.168.100.9/24 scope global mlx_0_1 00:09:50.313 valid_lft forever preferred_lft forever 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 
00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:50.313 192.168.100.9' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:50.313 192.168.100.9' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:50.313 192.168.100.9' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
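[editor's aside] The get_ip_address / get_available_rdma_ips steps traced above reduce to a short pipeline over "ip -o -4 addr show". A minimal standalone sketch, assuming the two Mellanox netdevs are named mlx_0_0 and mlx_0_1 as in this run (the real helpers live in test/nvmf/common.sh and carry more error handling than shown here):

#!/usr/bin/env bash
# Sketch only: mirror how the harness derives NVMF_FIRST/SECOND_TARGET_IP.
set -euo pipefail

get_ip_address() {
    local interface=$1
    # "ip -o -4" prints one line per address; field 4 is the CIDR form (e.g. 192.168.100.8/24)
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

rdma_ips=()
for net_dev in mlx_0_0 mlx_0_1; do   # interface names taken from this log
    rdma_ips+=("$(get_ip_address "$net_dev")")
done

NVMF_FIRST_TARGET_IP=${rdma_ips[0]}
NVMF_SECOND_TARGET_IP=${rdma_ips[1]}
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"

Run against this host it would print first=192.168.100.8 second=192.168.100.9, matching the RDMA_IP_LIST assembled in the trace.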
00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1357961 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1357961 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@823 -- # '[' -z 1357961 ']' 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:50.313 23:35:39 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:50.313 [2024-07-15 23:35:39.245524] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:09:50.313 [2024-07-15 23:35:39.245575] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.572 [2024-07-15 23:35:39.301097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:50.572 [2024-07-15 23:35:39.374980] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.572 [2024-07-15 23:35:39.375020] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.572 [2024-07-15 23:35:39.375027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.572 [2024-07-15 23:35:39.375033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.572 [2024-07-15 23:35:39.375037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
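[editor's aside] nvmfappstart above amounts to launching nvmf_tgt in the background and waiting until its RPC socket answers. A rough sketch of that idea, assuming an SPDK checkout in ./spdk and the default /var/tmp/spdk.sock socket; the real waitforlisten in autotest_common.sh is more elaborate than this polling loop:

# Launch the target with the same flags as the run above, then wait for RPC.
./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

for _ in $(seq 1 100); do
    # rpc.py fails until the target is listening on /var/tmp/spdk.sock
    if ./spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt up with pid $nvmfpid"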
00:09:50.572 [2024-07-15 23:35:39.375143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.572 [2024-07-15 23:35:39.375228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.572 [2024-07-15 23:35:39.375229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@856 -- # return 0 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:51.140 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.399 [2024-07-15 23:35:40.123845] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1bfe200/0x1c026f0) succeed. 00:09:51.399 [2024-07-15 23:35:40.134007] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1bff7a0/0x1c43d80) succeed. 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.399 Malloc0 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.399 Delay0 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.399 [2024-07-15 23:35:40.283625] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:51.399 23:35:40 nvmf_rdma.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:51.399 [2024-07-15 23:35:40.371274] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:53.933 Initializing NVMe Controllers 00:09:53.933 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:09:53.933 controller IO queue size 128 less than required 00:09:53.933 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:53.934 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:53.934 Initialization complete. Launching workers. 00:09:53.934 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51342 00:09:53.934 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51403, failed to submit 62 00:09:53.934 success 51343, unsuccess 60, failed 0 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@553 -- # xtrace_disable 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:53.934 rmmod nvme_rdma 00:09:53.934 rmmod nvme_fabrics 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@489 -- # 
'[' -n 1357961 ']' 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1357961 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@942 -- # '[' -z 1357961 ']' 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@946 -- # kill -0 1357961 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@947 -- # uname 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1357961 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1357961' 00:09:53.934 killing process with pid 1357961 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@961 -- # kill 1357961 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@966 -- # wait 1357961 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:53.934 00:09:53.934 real 0m8.972s 00:09:53.934 user 0m14.044s 00:09:53.934 sys 0m4.300s 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@1118 -- # xtrace_disable 00:09:53.934 23:35:42 nvmf_rdma.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:53.934 ************************************ 00:09:53.934 END TEST nvmf_abort 00:09:53.934 ************************************ 00:09:53.934 23:35:42 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:09:53.934 23:35:42 nvmf_rdma -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:53.934 23:35:42 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:09:53.934 23:35:42 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:09:53.934 23:35:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:09:53.934 ************************************ 00:09:53.934 START TEST nvmf_ns_hotplug_stress 00:09:53.934 ************************************ 00:09:53.934 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:09:54.193 * Looking for test storage... 
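[editor's aside] Recapping the nvmf_abort run that just ended above: stripped of the xtrace noise, target/abort.sh is the following RPC sequence plus one invocation of the abort example, all taken from the trace itself; $rpc is assumed to point at scripts/rpc.py in the SPDK tree:

rpc=./spdk/scripts/rpc.py

# Target-side setup: RDMA transport, a delay bdev stacked on a malloc bdev,
# and a subsystem exposing it on 192.168.100.8:4420.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

# Host-side workload: the delay bdev keeps I/O queued long enough to be aborted.
./spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

# Teardown.
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0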
00:09:54.193 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.193 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.194 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.194 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.194 23:35:42 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 
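[editor's aside] The NVME_HOSTNQN / NVME_HOSTID pair sourced just above comes from nvme-cli's gen-hostnqn, whose output has the form nqn.2014-08.org.nvmexpress:uuid:<uuid>. A small sketch; the parameter-expansion extraction is my shorthand rather than necessarily what common.sh does, but it yields the same pair of values seen in this log:

# Generate a host NQN and pull the UUID part out as the host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:803833e2-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # strip everything up to and including "uuid:"
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"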
00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:09:54.194 23:35:43 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:59.469 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:59.469 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:da:00.0: mlx_0_0' 00:09:59.469 Found net devices under 0000:da:00.0: mlx_0_0 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:59.469 Found net devices under 0000:da:00.1: mlx_0_1 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:59.469 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:59.470 23:35:47 
nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:59.470 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:59.470 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:09:59.470 altname enp218s0f0np0 00:09:59.470 altname ens818f0np0 00:09:59.470 inet 192.168.100.8/24 scope global mlx_0_0 00:09:59.470 valid_lft forever preferred_lft forever 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:59.470 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:59.470 
link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:09:59.470 altname enp218s0f1np1 00:09:59.470 altname ens818f1np1 00:09:59.470 inet 192.168.100.9/24 scope global mlx_0_1 00:09:59.470 valid_lft forever preferred_lft forever 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # 
get_ip_address mlx_0_1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:59.470 192.168.100.9' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:59.470 192.168.100.9' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:59.470 192.168.100.9' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1361620 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1361620 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@823 -- # '[' -z 1361620 ']' 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
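[editor's aside] The rdma_device_init / load_ib_rdma_modules step traced a little further up (nvmf/common.sh@62-68), followed by the final "modprobe nvme-rdma" here, boils down to loading a fixed set of kernel modules. A sketch of just that, in the order the trace shows:

# RDMA core and connection-manager stack used on the target side.
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
    modprobe "$mod"
done
# Host-side NVMe over RDMA transport, loaded once the target IPs are known.
modprobe nvme-rdma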
00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:09:59.470 23:35:47 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:59.470 [2024-07-15 23:35:47.984809] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:09:59.470 [2024-07-15 23:35:47.984853] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.470 [2024-07-15 23:35:48.038870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:59.470 [2024-07-15 23:35:48.119014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.470 [2024-07-15 23:35:48.119048] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.470 [2024-07-15 23:35:48.119055] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.470 [2024-07-15 23:35:48.119061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.470 [2024-07-15 23:35:48.119066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:59.470 [2024-07-15 23:35:48.119102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.470 [2024-07-15 23:35:48.119187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.470 [2024-07-15 23:35:48.119188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.036 23:35:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:00.036 23:35:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # return 0 00:10:00.036 23:35:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:00.036 23:35:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.036 23:35:48 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:00.036 23:35:48 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.036 23:35:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:00.036 23:35:48 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:00.036 [2024-07-15 23:35:48.999748] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2279200/0x227d6f0) succeed. 00:10:00.036 [2024-07-15 23:35:49.008658] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x227a7a0/0x22bed80) succeed. 
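[editor's aside] The next stretch of the trace configures the hotplug target and then starts stressing it. Pulled out of the log (ns_hotplug_stress.sh steps @27 through @50), the sequence is roughly the following; $rpc again stands in for scripts/rpc.py, and the loop structure is reconstructed from the repeating @44-@50 trace lines rather than copied from the script itself:

rpc=./spdk/scripts/rpc.py

# Target setup: RDMA transport, subsystem cnode1 (max 10 namespaces), listeners,
# a delay bdev as namespace 1 and a resizable null bdev as namespace 2.
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc bdev_null_create NULL1 1000 512      # null bdev, size 1000, 512-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Host-side load: 30 s of random reads through the RDMA listener, in the background.
./spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!

# Hotplug stress: while perf is alive, detach/re-attach namespace 1 and grow NULL1.
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    $rpc bdev_null_resize NULL1 "$null_size"
done
wait "$PERF_PID"

The "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines that follow are the perf tool reporting the expected I/O errors each time namespace 1 disappears mid-run.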
00:10:00.295 23:35:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:00.553 23:35:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:00.553 [2024-07-15 23:35:49.456321] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:00.553 23:35:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:10:00.811 23:35:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:01.068 Malloc0 00:10:01.068 23:35:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:01.068 Delay0 00:10:01.068 23:35:49 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:01.326 23:35:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:01.584 NULL1 00:10:01.584 23:35:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:01.842 23:35:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:01.842 23:35:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1362110 00:10:01.842 23:35:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:01.842 23:35:50 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.778 Read completed with error (sct=0, sc=11) 00:10:02.778 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:02.778 23:35:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:03.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.036 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:03.036 23:35:51 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:03.036 23:35:51 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:03.293 true 00:10:03.293 23:35:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:03.293 23:35:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.239 23:35:52 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:04.239 23:35:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:04.239 23:35:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:04.497 true 00:10:04.497 23:35:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:04.497 23:35:53 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.431 23:35:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:05.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:05.431 23:35:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:05.431 23:35:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:05.687 true 00:10:05.687 23:35:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:05.687 23:35:54 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.619 23:35:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:06.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:06.620 23:35:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:06.620 23:35:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:06.877 true 00:10:06.877 23:35:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:06.877 23:35:55 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:07.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.812 23:35:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:07.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.812 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:07.812 23:35:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:07.812 23:35:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:08.070 true 00:10:08.070 23:35:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:08.070 23:35:56 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.005 23:35:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:09.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.005 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:09.005 23:35:57 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:09.005 23:35:57 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:09.263 true 00:10:09.263 23:35:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:09.263 23:35:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:10.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.198 23:35:58 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:10.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.198 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:10.198 23:35:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:10.198 23:35:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:10.456 true 00:10:10.456 23:35:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:10.456 23:35:59 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.393 23:36:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.393 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:11.393 23:36:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:11.393 23:36:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:11.651 true 00:10:11.651 23:36:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:11.651 23:36:00 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:10:12.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.586 23:36:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:12.586 23:36:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:12.586 23:36:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:12.844 true 00:10:12.844 23:36:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:12.844 23:36:01 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:13.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.788 23:36:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:13.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.788 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:13.788 23:36:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:13.788 23:36:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:14.046 true 00:10:14.046 23:36:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:14.046 23:36:02 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:14.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.983 23:36:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:14.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.983 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:10:14.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:14.983 23:36:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:14.983 23:36:03 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:15.242 true 00:10:15.242 23:36:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:15.242 23:36:04 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:16.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.179 23:36:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:16.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.179 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:16.436 23:36:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:16.436 23:36:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:16.436 true 00:10:16.436 23:36:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:16.436 23:36:05 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:17.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.478 23:36:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:17.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.478 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:17.478 23:36:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:17.478 23:36:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:17.735 true 00:10:17.735 23:36:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 
00:10:17.735 23:36:06 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:18.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.669 23:36:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:18.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.669 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:18.669 23:36:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:18.669 23:36:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:18.927 true 00:10:18.927 23:36:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:18.927 23:36:07 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:19.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.862 23:36:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:19.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:19.862 23:36:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:19.862 23:36:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:20.120 true 00:10:20.120 23:36:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:20.120 23:36:08 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:21.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.055 23:36:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:21.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.055 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.055 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:21.055 23:36:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:21.055 23:36:09 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:21.313 true 00:10:21.313 23:36:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:21.313 23:36:10 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:22.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.250 23:36:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:22.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:22.509 23:36:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:22.509 23:36:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:22.509 true 00:10:22.509 23:36:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:22.509 23:36:11 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:23.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.445 23:36:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:23.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:23.704 23:36:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:23.704 23:36:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:23.704 true 00:10:23.704 23:36:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:23.704 23:36:12 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:24.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.640 23:36:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:24.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.640 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:24.899 23:36:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:24.899 23:36:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:24.899 true 00:10:24.899 23:36:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:24.899 23:36:13 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:25.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.834 23:36:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:25.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:25.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:26.093 23:36:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:26.093 23:36:14 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:26.093 true 00:10:26.093 23:36:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:26.093 23:36:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:27.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.025 23:36:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:27.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.025 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:27.025 23:36:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:27.025 23:36:15 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:27.282 true 00:10:27.282 23:36:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:27.282 23:36:16 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.217 23:36:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:28.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:28.217 23:36:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:28.217 23:36:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:28.475 true 00:10:28.475 23:36:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:28.475 23:36:17 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:29.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.410 23:36:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:29.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:29.410 23:36:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:29.410 23:36:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:29.668 true 00:10:29.668 23:36:18 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:29.668 23:36:18 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:30.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.602 23:36:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:30.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.602 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:30.602 23:36:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:30.602 23:36:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:30.860 true 00:10:30.860 23:36:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:30.860 23:36:19 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:31.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.794 23:36:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:31.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:31.794 23:36:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:31.794 23:36:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:32.053 true 00:10:32.053 23:36:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:32.053 23:36:20 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:32.987 23:36:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:32.987 23:36:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:32.987 23:36:21 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:33.245 true 00:10:33.245 23:36:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:33.245 23:36:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:33.503 23:36:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:33.760 23:36:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:33.760 23:36:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:33.760 true 00:10:33.760 23:36:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:33.760 23:36:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.017 23:36:22 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.275 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:34.275 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:34.275 true 00:10:34.275 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:34.275 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.532 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:34.791 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:34.791 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:34.791 true 00:10:34.791 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110 00:10:34.791 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:34.791 Initializing NVMe Controllers 00:10:34.791 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:34.791 Controller IO queue size 128, less than required. 00:10:34.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:34.791 Controller IO queue size 128, less than required. 00:10:34.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:34.791 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:34.791 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:34.791 Initialization complete. Launching workers.
00:10:34.791 ========================================================
00:10:34.791 Latency(us)
00:10:34.791 Device Information : IOPS MiB/s Average min max
00:10:34.791 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5971.20 2.92 19528.56 947.03 1143474.71
00:10:34.791 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 33984.20 16.59 3766.34 1719.51 298691.60
00:10:34.791 ========================================================
00:10:34.791 Total : 39955.40 19.51 6121.95 947.03 1143474.71
00:10:34.791
00:10:35.049 23:36:23 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:35.307 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:10:35.307 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:10:35.307 true
00:10:35.307 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1362110
00:10:35.307 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1362110) - No such process
00:10:35.307 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1362110
00:10:35.307 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:35.565 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:35.824 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:35.824 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:35.824 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:35.824 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:35.824 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:35.824 null0
00:10:35.824 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:35.824 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:35.824 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:36.083 null1
00:10:36.083 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:36.083 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:36.083 23:36:24 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:10:36.341 null2 00:10:36.341 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.341 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.341 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:10:36.341 null3 00:10:36.598 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.598 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.598 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:10:36.598 null4 00:10:36.598 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.598 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.598 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:36.856 null5 00:10:36.856 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:36.856 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:36.856 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:37.115 null6 00:10:37.115 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.115 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:37.115 23:36:25 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:37.115 null7 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
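Two things about the stretch of trace that just ended are worth spelling out. First, the spdk_nvme_perf summary above is internally consistent: 5971.20 + 33984.20 = 39955.40 IOPS, and the overall 6121.95 us average is the IOPS-weighted mean of the two namespaces, (5971.20 * 19528.56 + 33984.20 * 3766.34) / 39955.40 ≈ 6121.9 us, dominated by NSID 2 (the NULL1 bdev, going by the order the namespaces were added). Second, the resize iterations that ran while perf was alive (null_size 1000 through 1030) all follow one pattern, reconstructed below from the echoed script lines 44-50; PERF_PID and null_size are the names visible in the xtrace, but the loop spelling itself is a sketch rather than the verbatim ns_hotplug_stress.sh.

    RPC=./scripts/rpc.py
    null_size=1000                                   # starting size set before the perf run

    # While the 30-second spdk_nvme_perf workload (PID $PERF_PID) is still alive:
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # hot-remove Delay0 (nsid 1)
        $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # hot-add it back
        null_size=$((null_size + 1))                                    # 1001, 1002, ... 1030 in this run
        $RPC bdev_null_resize NULL1 "$null_size"                        # grow the null bdev under nsid 2
    done
    # The loop exits once kill -0 reports "No such process", as logged above.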
00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
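The interleaved "(( i < 10 ))" / nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns entries above and below are eight copies of the same worker, the script's add_remove function, each bound to one namespace ID and one null bdev. Reconstructed from the xtrace (the for-loop spelling is a sketch; add_remove, nsid and bdev are the names echoed in the trace):

    # One hot-plug worker: ten add/remove cycles of a single namespace.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $RPC nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # script line @17
            $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # script line @18
        done
    }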
00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
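The fan-out around the add_remove workers is also visible in the trace: nthreads=8 and pids=() were set just after the perf summary, one 100 MB null bdev with 4096-byte blocks (null0 through null7) was created per worker, and the "wait 1368554 1368557 ..." entry further down reaps them. A sketch of that orchestration, using the variable names echoed above:

    nthreads=8
    pids=()

    # One backing null bdev per worker (the @59/@60 loop earlier in the trace)
    for ((i = 0; i < nthreads; i++)); do
        $RPC bdev_null_create "null$i" 100 4096
    done

    # Launch the workers in the background and remember their PIDs
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &        # nsid 1..8 against null0..null7
        pids+=($!)
    done

    wait "${pids[@]}"                             # reaps PIDs such as 1368554 1368557 ... in this run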
00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1368554 1368557 1368560 1368564 1368566 1368569 1368572 1368576 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.115 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.373 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.373 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.373 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.373 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.373 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.373 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.373 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.373 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
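With eight workers concurrently plugging and unplugging namespaces 1 through 8, the suppressed read errors in this trace are expected: in-flight reads land on namespaces that have just been removed. The following does not appear in the captured run, but when replaying it by hand these two standard rpc.py spot checks show what the target currently exposes:

    # JSON dump of all subsystems; each entry lists its currently attached namespaces
    ./scripts/rpc.py nvmf_get_subsystems

    # Details (block count, block size) of one of the per-worker null bdevs
    ./scripts/rpc.py bdev_get_bdevs -b null3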
00:10:37.631 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.890 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:37.891 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.151 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.151 23:36:26 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.151 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.151 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.151 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.151 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.151 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.151 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.410 23:36:27 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.410 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:38.668 
23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:38.668 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:38.927 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:38.927 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:38.927 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:38.927 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:38.927 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:38.927 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:38.927 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:38.927 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.186 23:36:27 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.186 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.186 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.186 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.186 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.186 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.186 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.186 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.443 23:36:28 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.443 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.700 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.700 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:39.700 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.700 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.700 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.700 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.700 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.700 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.957 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.957 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.957 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:39.957 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.957 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.958 23:36:28 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.215 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.473 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.473 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.473 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.473 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.473 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.473 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.473 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.473 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:40.732 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:40.991 23:36:29 
nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:40.991 rmmod nvme_rdma 00:10:40.991 rmmod nvme_fabrics 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1361620 ']' 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1361620 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@942 -- # '[' -z 1361620 ']' 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # kill -0 1361620 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # uname 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' Linux = 
Linux ']' 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1361620 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1361620' 00:10:40.991 killing process with pid 1361620 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@961 -- # kill 1361620 00:10:40.991 23:36:29 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # wait 1361620 00:10:41.249 23:36:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:41.249 23:36:30 nvmf_rdma.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:41.249 00:10:41.249 real 0m47.323s 00:10:41.249 user 3m21.193s 00:10:41.249 sys 0m11.258s 00:10:41.249 23:36:30 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:41.249 23:36:30 nvmf_rdma.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:41.249 ************************************ 00:10:41.249 END TEST nvmf_ns_hotplug_stress 00:10:41.249 ************************************ 00:10:41.509 23:36:30 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:10:41.509 23:36:30 nvmf_rdma -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:41.509 23:36:30 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:41.509 23:36:30 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:41.509 23:36:30 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:41.509 ************************************ 00:10:41.509 START TEST nvmf_connect_stress 00:10:41.509 ************************************ 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:10:41.509 * Looking for test storage... 
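Note: at this point the hotplug stress loop has finished (about 47 s of wall time and roughly 3m21s of user CPU across the workers), the script drops its exit trap, tears the target down, and nvmf.sh moves on to the connect_stress test over RDMA. The following is a rough, hedged sketch of the teardown sequence traced above (nvmftestfini -> nvmfcleanup -> killprocess); the function names mirror the trace, but the bodies are simplified reconstructions rather than the helpers' verbatim source in nvmf/common.sh and autotest_common.sh.

    # Simplified teardown sketch; the bodies are assumptions, only the observable
    # effects (module unload, killing the target PID) are taken from the log.
    nvmfcleanup_sketch() {
        sync
        # RDMA transport was used, so unload the initiator-side kernel modules.
        modprobe -v -r nvme-rdma
        modprobe -v -r nvme-fabrics
    }

    killprocess_sketch() {
        local pid=$1
        # Signal the target app only if the PID is still alive, then reap it.
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            wait "$pid" 2>/dev/null   # succeeds when the target is a child of this shell
        fi
    }

    nvmfcleanup_sketch
    killprocess_sketch 1361620       # nvmf target PID reported in the log above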
00:10:41.509 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:41.509 23:36:30 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:46.777 23:36:35 
nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:46.777 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:46.777 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:46.777 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:46.778 Found net devices under 0000:da:00.0: mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:46.778 Found net devices under 0000:da:00.1: mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # 
continue 2 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:46.778 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.778 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:10:46.778 altname enp218s0f0np0 00:10:46.778 altname ens818f0np0 00:10:46.778 inet 192.168.100.8/24 scope global mlx_0_0 00:10:46.778 valid_lft forever preferred_lft forever 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:46.778 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:46.778 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:10:46.778 altname enp218s0f1np1 00:10:46.778 altname ens818f1np1 00:10:46.778 inet 192.168.100.9/24 scope global mlx_0_1 00:10:46.778 valid_lft forever preferred_lft forever 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:46.778 192.168.100.9' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:46.778 192.168.100.9' 
00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:46.778 192.168.100.9' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:46.778 23:36:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1372403 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1372403 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@823 -- # '[' -z 1372403 ']' 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@828 -- # local max_retries=100 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@832 -- # xtrace_disable 00:10:46.779 23:36:35 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:46.779 [2024-07-15 23:36:35.642829] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:10:46.779 [2024-07-15 23:36:35.642882] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.779 [2024-07-15 23:36:35.698091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:47.038 [2024-07-15 23:36:35.783176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.038 [2024-07-15 23:36:35.783209] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:47.038 [2024-07-15 23:36:35.783216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.038 [2024-07-15 23:36:35.783223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.038 [2024-07-15 23:36:35.783228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.038 [2024-07-15 23:36:35.783326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.038 [2024-07-15 23:36:35.783389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.038 [2024-07-15 23:36:35.783390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@856 -- # return 0 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:47.605 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.605 [2024-07-15 23:36:36.519026] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x221d200/0x22216f0) succeed. 00:10:47.605 [2024-07-15 23:36:36.527910] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x221e7a0/0x2262d80) succeed. 
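The nvmftestinit trace above finds the two mlx5 ports at 0000:da:00.0/.1, maps them to the netdevs mlx_0_0 and mlx_0_1, reads their IPv4 addresses (192.168.100.8 and 192.168.100.9), starts nvmf_tgt, and creates the RDMA transport. A minimal sketch of the address lookup performed by the get_ip_address helper seen in the trace, assuming the interface already carries an IPv4 address, is shown here; it is illustrative, not the nvmf/common.sh implementation itself:

  # Field 4 of "ip -o -4 addr show <dev>" is ADDR/PREFIX; cut drops the prefix.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # In this run the test ends up with these two values (the trace derives them
  # from RDMA_IP_LIST via head/tail; shown directly here for brevity):
  NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)   # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)  # 192.168.100.9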
00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.864 [2024-07-15 23:36:36.634367] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:47.864 NULL1 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1372615 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:10:47.864 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:10:47.865 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:47.865 23:36:36 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:47.865 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:47.865 23:36:36 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.123 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:48.123 23:36:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1372615 00:10:48.123 23:36:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.123 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.123 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.690 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:48.690 23:36:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:48.690 23:36:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.690 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.690 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:48.948 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:48.948 23:36:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:48.948 23:36:37 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:48.948 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:48.948 23:36:37 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.210 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:49.210 23:36:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:49.210 23:36:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.210 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:49.210 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.522 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:49.522 23:36:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:49.522 23:36:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.522 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:49.522 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:49.781 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:49.781 23:36:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:49.781 23:36:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:49.781 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:49.781 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.040 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:50.040 23:36:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:50.040 23:36:38 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.040 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:50.040 23:36:38 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.606 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:50.606 23:36:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 
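The connect_stress.sh fragment traced repeatedly above (the @27/@28 and @34/@35 lines of that script) first queues twenty copies of an RPC into rpc.txt and then polls the stress client: while kill -0 $PERF_PID still succeeds, the queued RPCs are replayed against the running target. Below is a sketch of that poll loop, inferred from the repeating @34/@35 trace lines; rpc_cmd is SPDK's wrapper around scripts/rpc.py:

  # Keep exercising the target for as long as the connect_stress client lives.
  # The final failed probe is what prints the "kill: (1372615) - No such process"
  # message further down in the log.
  while kill -0 "$PERF_PID"; do
      rpc_cmd < "$rpcs"
  done
  wait "$PERF_PID"   # connect_stress.sh@38 in the trace
  rm -f "$rpcs"      # connect_stress.sh@39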
00:10:50.606 23:36:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.606 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:50.606 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:50.864 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:50.864 23:36:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:50.864 23:36:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:50.864 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:50.864 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.122 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:51.122 23:36:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:51.122 23:36:39 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.122 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:51.122 23:36:39 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.388 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:51.388 23:36:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:51.388 23:36:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.388 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:51.388 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:51.652 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:51.652 23:36:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:51.652 23:36:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:51.652 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:51.652 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.217 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:52.217 23:36:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:52.217 23:36:40 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.217 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:52.217 23:36:40 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.475 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:52.475 23:36:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:52.475 23:36:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.475 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:52.475 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.733 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:52.733 23:36:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:52.733 23:36:41 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.733 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:52.733 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:52.990 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:52.990 23:36:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:52.990 23:36:41 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:52.990 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:52.990 23:36:41 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.248 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:53.248 23:36:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:53.248 23:36:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.248 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:53.248 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:53.812 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:53.812 23:36:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:53.812 23:36:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:53.812 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:53.812 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.068 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:54.068 23:36:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:54.068 23:36:42 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.068 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:54.068 23:36:42 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.367 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:54.367 23:36:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:54.367 23:36:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.367 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:54.367 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.624 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:54.624 23:36:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:54.624 23:36:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.624 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:54.624 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:54.881 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:54.881 23:36:43 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:54.881 23:36:43 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:54.881 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:54.881 23:36:43 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.445 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:55.445 23:36:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:55.445 23:36:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.445 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:55.445 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.704 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:55.704 23:36:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:55.704 23:36:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.704 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:55.704 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:55.963 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:55.963 23:36:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:55.963 23:36:44 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:55.963 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:55.963 23:36:44 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.222 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:56.222 23:36:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:56.222 23:36:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.222 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:56.222 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:56.480 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:56.480 23:36:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:56.480 23:36:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:56.480 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:56.480 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.047 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:57.047 23:36:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:57.047 23:36:45 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.047 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:57.047 23:36:45 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.305 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:57.305 23:36:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:57.305 23:36:46 
nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.305 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:57.305 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.563 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:57.563 23:36:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:57.563 23:36:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.563 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:57.563 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:57.820 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:57.820 23:36:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:57.820 23:36:46 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:10:57.820 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@553 -- # xtrace_disable 00:10:57.820 23:36:46 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.077 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:10:58.077 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:10:58.077 23:36:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1372615 00:10:58.077 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1372615) - No such process 00:10:58.077 23:36:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1372615 00:10:58.077 23:36:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:58.335 rmmod nvme_rdma 00:10:58.335 rmmod nvme_fabrics 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1372403 ']' 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1372403 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@942 -- # '[' -z 1372403 ']' 00:10:58.335 23:36:47 
nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@946 -- # kill -0 1372403 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@947 -- # uname 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1372403 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1372403' 00:10:58.335 killing process with pid 1372403 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@961 -- # kill 1372403 00:10:58.335 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@966 -- # wait 1372403 00:10:58.594 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:58.594 23:36:47 nvmf_rdma.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:58.594 00:10:58.594 real 0m17.131s 00:10:58.594 user 0m42.655s 00:10:58.594 sys 0m5.586s 00:10:58.594 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@1118 -- # xtrace_disable 00:10:58.594 23:36:47 nvmf_rdma.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:10:58.594 ************************************ 00:10:58.594 END TEST nvmf_connect_stress 00:10:58.594 ************************************ 00:10:58.594 23:36:47 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:10:58.594 23:36:47 nvmf_rdma -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:58.594 23:36:47 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:10:58.594 23:36:47 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:10:58.594 23:36:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:10:58.594 ************************************ 00:10:58.594 START TEST nvmf_fused_ordering 00:10:58.594 ************************************ 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:10:58.594 * Looking for test storage... 
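Before nvmf_fused_ordering repeats the same initialization, the nvmftestfini/killprocess sequence traced just above tears the previous test down: it syncs, removes the nvme-rdma and nvme-fabrics modules (the rmmod lines), and stops the nvmf_tgt reactor by PID. A rough sketch of that teardown follows, with the helpers' error handling omitted; the real logic lives in nvmf/common.sh (nvmfcleanup) and autotest_common.sh (killprocess):

  sync
  modprobe -v -r nvme-rdma      # the "rmmod nvme_rdma" lines above come from this step
  modprobe -v -r nvme-fabrics
  if kill -0 "$nvmfpid" 2>/dev/null; then
      kill "$nvmfpid"           # pid 1372403 in this run
      wait "$nvmfpid"           # reap the target so the next test starts clean
  fi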
00:10:58.594 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.594 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.853 23:36:47 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.129 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:04.130 23:36:52 
nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:04.130 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:04.130 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:04.130 Found net devices under 0000:da:00.0: mlx_0_0 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:04.130 Found net devices under 0000:da:00.1: mlx_0_1 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # 
continue 2 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:04.130 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:04.130 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:04.130 altname enp218s0f0np0 00:11:04.130 altname ens818f0np0 00:11:04.130 inet 192.168.100.8/24 scope global mlx_0_0 00:11:04.130 valid_lft forever preferred_lft forever 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:04.130 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:04.130 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:04.130 altname enp218s0f1np1 00:11:04.130 altname ens818f1np1 00:11:04.130 inet 192.168.100.9/24 scope global mlx_0_1 00:11:04.130 valid_lft forever preferred_lft forever 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:11:04.130 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:04.131 192.168.100.9' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:04.131 192.168.100.9' 
00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:04.131 192.168.100.9' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1377524 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1377524 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@823 -- # '[' -z 1377524 ']' 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:04.131 23:36:52 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:04.131 [2024-07-15 23:36:52.929436] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:11:04.131 [2024-07-15 23:36:52.929480] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.131 [2024-07-15 23:36:52.986229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.131 [2024-07-15 23:36:53.060250] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.131 [2024-07-15 23:36:53.060289] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
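The entries above show nvmfappstart bringing the target up: build/bin/nvmf_tgt is launched with -i 0 -e 0xFFFF -m 0x2 (shared-memory id 0, all tracepoint groups enabled, core mask 0x2 i.e. core 1), and waitforlisten then blocks until that pid is answering on /var/tmp/spdk.sock. A minimal standalone sketch of the same start-and-wait pattern, run from an SPDK checkout (the rpc_get_methods poll here is only an assumed stand-in for the harness's waitforlisten helper):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # start the NVMe-oF target app in the background
    nvmfpid=$!
    # poll the default RPC socket until the target is up and answering RPCs
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
            sleep 0.5
    done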
00:11:04.131 [2024-07-15 23:36:53.060295] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.131 [2024-07-15 23:36:53.060301] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.131 [2024-07-15 23:36:53.060305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.131 [2024-07-15 23:36:53.060321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # return 0 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 [2024-07-15 23:36:53.792207] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x898c20/0x89d110) succeed. 00:11:05.064 [2024-07-15 23:36:53.801146] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x89a120/0x8de7a0) succeed. 
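With the target running, configuration is driven through rpc_cmd: the RDMA transport is created here (producing the two create_ib_device notices for mlx5_0 and mlx5_1), and the entries that follow add the subsystem, its RDMA listener, a null bdev, and the namespace. Collected into standalone form, and assuming rpc_cmd simply forwards its arguments to scripts/rpc.py against the default /var/tmp/spdk.sock socket, the sequence is roughly:

    # RDMA transport with 1024 shared buffers and an 8192-byte IO unit size
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # subsystem cnode1: allow any host, serial SPDK00000000000001, up to 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    # listen on the first mlx port's address from the trace above
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # 1000 MB null bdev with 512-byte blocks, exposed as namespace 1 of cnode1
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1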
00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 [2024-07-15 23:36:53.862775] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 NULL1 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:05.064 23:36:53 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:05.064 [2024-07-15 23:36:53.915099] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:11:05.064 [2024-07-15 23:36:53.915130] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1377772 ] 00:11:05.323 Attached to nqn.2016-06.io.spdk:cnode1 00:11:05.323 Namespace ID: 1 size: 1GB 00:11:05.323 fused_ordering(0) 00:11:05.323 fused_ordering(1) 00:11:05.323 fused_ordering(2) 00:11:05.323 fused_ordering(3) 00:11:05.323 fused_ordering(4) 00:11:05.323 fused_ordering(5) 00:11:05.323 fused_ordering(6) 00:11:05.323 fused_ordering(7) 00:11:05.323 fused_ordering(8) 00:11:05.323 fused_ordering(9) 00:11:05.323 fused_ordering(10) 00:11:05.323 fused_ordering(11) 00:11:05.323 fused_ordering(12) 00:11:05.323 fused_ordering(13) 00:11:05.323 fused_ordering(14) 00:11:05.323 fused_ordering(15) 00:11:05.323 fused_ordering(16) 00:11:05.323 fused_ordering(17) 00:11:05.323 fused_ordering(18) 00:11:05.323 fused_ordering(19) 00:11:05.323 fused_ordering(20) 00:11:05.323 fused_ordering(21) 00:11:05.323 fused_ordering(22) 00:11:05.323 fused_ordering(23) 00:11:05.323 fused_ordering(24) 00:11:05.323 fused_ordering(25) 00:11:05.323 fused_ordering(26) 00:11:05.323 fused_ordering(27) 00:11:05.323 fused_ordering(28) 00:11:05.323 fused_ordering(29) 00:11:05.323 fused_ordering(30) 00:11:05.323 fused_ordering(31) 00:11:05.323 fused_ordering(32) 00:11:05.323 fused_ordering(33) 00:11:05.323 fused_ordering(34) 00:11:05.323 fused_ordering(35) 00:11:05.323 fused_ordering(36) 00:11:05.323 fused_ordering(37) 00:11:05.323 fused_ordering(38) 00:11:05.323 fused_ordering(39) 00:11:05.323 fused_ordering(40) 00:11:05.323 fused_ordering(41) 00:11:05.323 fused_ordering(42) 00:11:05.323 fused_ordering(43) 00:11:05.323 fused_ordering(44) 00:11:05.323 fused_ordering(45) 00:11:05.323 fused_ordering(46) 00:11:05.323 fused_ordering(47) 00:11:05.323 fused_ordering(48) 00:11:05.323 fused_ordering(49) 00:11:05.323 fused_ordering(50) 00:11:05.323 fused_ordering(51) 00:11:05.323 fused_ordering(52) 00:11:05.323 fused_ordering(53) 00:11:05.323 fused_ordering(54) 00:11:05.323 fused_ordering(55) 00:11:05.323 fused_ordering(56) 00:11:05.323 fused_ordering(57) 00:11:05.323 fused_ordering(58) 00:11:05.323 fused_ordering(59) 00:11:05.323 fused_ordering(60) 00:11:05.323 fused_ordering(61) 00:11:05.323 fused_ordering(62) 00:11:05.323 fused_ordering(63) 00:11:05.323 fused_ordering(64) 00:11:05.323 fused_ordering(65) 00:11:05.323 fused_ordering(66) 00:11:05.323 fused_ordering(67) 00:11:05.323 fused_ordering(68) 00:11:05.323 fused_ordering(69) 00:11:05.323 fused_ordering(70) 00:11:05.323 fused_ordering(71) 00:11:05.323 fused_ordering(72) 00:11:05.323 fused_ordering(73) 00:11:05.323 fused_ordering(74) 00:11:05.323 fused_ordering(75) 00:11:05.323 fused_ordering(76) 00:11:05.323 fused_ordering(77) 00:11:05.323 fused_ordering(78) 00:11:05.323 fused_ordering(79) 00:11:05.323 fused_ordering(80) 00:11:05.323 fused_ordering(81) 00:11:05.323 fused_ordering(82) 00:11:05.323 fused_ordering(83) 00:11:05.323 fused_ordering(84) 00:11:05.323 fused_ordering(85) 00:11:05.323 fused_ordering(86) 00:11:05.323 fused_ordering(87) 00:11:05.323 fused_ordering(88) 00:11:05.323 fused_ordering(89) 00:11:05.323 fused_ordering(90) 00:11:05.323 fused_ordering(91) 00:11:05.323 fused_ordering(92) 00:11:05.323 fused_ordering(93) 00:11:05.323 fused_ordering(94) 00:11:05.323 fused_ordering(95) 00:11:05.323 fused_ordering(96) 00:11:05.323 fused_ordering(97) 00:11:05.323 fused_ordering(98) 
00:11:05.323 fused_ordering(99) 00:11:05.323 fused_ordering(100) 00:11:05.323 fused_ordering(101) 00:11:05.323 fused_ordering(102) 00:11:05.323 fused_ordering(103) 00:11:05.323 fused_ordering(104) 00:11:05.323 fused_ordering(105) 00:11:05.323 fused_ordering(106) 00:11:05.323 fused_ordering(107) 00:11:05.323 fused_ordering(108) 00:11:05.323 fused_ordering(109) 00:11:05.323 fused_ordering(110) 00:11:05.323 fused_ordering(111) 00:11:05.323 fused_ordering(112) 00:11:05.323 fused_ordering(113) 00:11:05.323 fused_ordering(114) 00:11:05.323 fused_ordering(115) 00:11:05.323 fused_ordering(116) 00:11:05.323 fused_ordering(117) 00:11:05.323 fused_ordering(118) 00:11:05.323 fused_ordering(119) 00:11:05.323 fused_ordering(120) 00:11:05.323 fused_ordering(121) 00:11:05.323 fused_ordering(122) 00:11:05.323 fused_ordering(123) 00:11:05.323 fused_ordering(124) 00:11:05.323 fused_ordering(125) 00:11:05.323 fused_ordering(126) 00:11:05.323 fused_ordering(127) 00:11:05.323 fused_ordering(128) 00:11:05.323 fused_ordering(129) 00:11:05.323 fused_ordering(130) 00:11:05.323 fused_ordering(131) 00:11:05.323 fused_ordering(132) 00:11:05.323 fused_ordering(133) 00:11:05.323 fused_ordering(134) 00:11:05.323 fused_ordering(135) 00:11:05.323 fused_ordering(136) 00:11:05.323 fused_ordering(137) 00:11:05.323 fused_ordering(138) 00:11:05.323 fused_ordering(139) 00:11:05.323 fused_ordering(140) 00:11:05.323 fused_ordering(141) 00:11:05.323 fused_ordering(142) 00:11:05.323 fused_ordering(143) 00:11:05.323 fused_ordering(144) 00:11:05.323 fused_ordering(145) 00:11:05.323 fused_ordering(146) 00:11:05.323 fused_ordering(147) 00:11:05.323 fused_ordering(148) 00:11:05.323 fused_ordering(149) 00:11:05.323 fused_ordering(150) 00:11:05.323 fused_ordering(151) 00:11:05.323 fused_ordering(152) 00:11:05.323 fused_ordering(153) 00:11:05.323 fused_ordering(154) 00:11:05.323 fused_ordering(155) 00:11:05.323 fused_ordering(156) 00:11:05.323 fused_ordering(157) 00:11:05.323 fused_ordering(158) 00:11:05.323 fused_ordering(159) 00:11:05.323 fused_ordering(160) 00:11:05.323 fused_ordering(161) 00:11:05.323 fused_ordering(162) 00:11:05.323 fused_ordering(163) 00:11:05.323 fused_ordering(164) 00:11:05.323 fused_ordering(165) 00:11:05.323 fused_ordering(166) 00:11:05.323 fused_ordering(167) 00:11:05.323 fused_ordering(168) 00:11:05.323 fused_ordering(169) 00:11:05.323 fused_ordering(170) 00:11:05.323 fused_ordering(171) 00:11:05.323 fused_ordering(172) 00:11:05.323 fused_ordering(173) 00:11:05.323 fused_ordering(174) 00:11:05.323 fused_ordering(175) 00:11:05.323 fused_ordering(176) 00:11:05.323 fused_ordering(177) 00:11:05.323 fused_ordering(178) 00:11:05.323 fused_ordering(179) 00:11:05.323 fused_ordering(180) 00:11:05.323 fused_ordering(181) 00:11:05.323 fused_ordering(182) 00:11:05.323 fused_ordering(183) 00:11:05.323 fused_ordering(184) 00:11:05.323 fused_ordering(185) 00:11:05.323 fused_ordering(186) 00:11:05.323 fused_ordering(187) 00:11:05.323 fused_ordering(188) 00:11:05.323 fused_ordering(189) 00:11:05.323 fused_ordering(190) 00:11:05.323 fused_ordering(191) 00:11:05.323 fused_ordering(192) 00:11:05.323 fused_ordering(193) 00:11:05.323 fused_ordering(194) 00:11:05.323 fused_ordering(195) 00:11:05.323 fused_ordering(196) 00:11:05.323 fused_ordering(197) 00:11:05.323 fused_ordering(198) 00:11:05.323 fused_ordering(199) 00:11:05.323 fused_ordering(200) 00:11:05.323 fused_ordering(201) 00:11:05.323 fused_ordering(202) 00:11:05.323 fused_ordering(203) 00:11:05.323 fused_ordering(204) 00:11:05.323 fused_ordering(205) 00:11:05.323 
fused_ordering(206) 00:11:05.323 fused_ordering(207) 00:11:05.323 fused_ordering(208) 00:11:05.323 fused_ordering(209) 00:11:05.323 fused_ordering(210) 00:11:05.323 fused_ordering(211) 00:11:05.323 fused_ordering(212) 00:11:05.323 fused_ordering(213) 00:11:05.323 fused_ordering(214) 00:11:05.323 fused_ordering(215) 00:11:05.323 fused_ordering(216) 00:11:05.323 fused_ordering(217) 00:11:05.323 fused_ordering(218) 00:11:05.323 fused_ordering(219) 00:11:05.323 fused_ordering(220) 00:11:05.323 fused_ordering(221) 00:11:05.323 fused_ordering(222) 00:11:05.323 fused_ordering(223) 00:11:05.323 fused_ordering(224) 00:11:05.323 fused_ordering(225) 00:11:05.323 fused_ordering(226) 00:11:05.323 fused_ordering(227) 00:11:05.323 fused_ordering(228) 00:11:05.323 fused_ordering(229) 00:11:05.323 fused_ordering(230) 00:11:05.323 fused_ordering(231) 00:11:05.323 fused_ordering(232) 00:11:05.323 fused_ordering(233) 00:11:05.323 fused_ordering(234) 00:11:05.323 fused_ordering(235) 00:11:05.323 fused_ordering(236) 00:11:05.323 fused_ordering(237) 00:11:05.323 fused_ordering(238) 00:11:05.323 fused_ordering(239) 00:11:05.323 fused_ordering(240) 00:11:05.323 fused_ordering(241) 00:11:05.323 fused_ordering(242) 00:11:05.323 fused_ordering(243) 00:11:05.323 fused_ordering(244) 00:11:05.324 fused_ordering(245) 00:11:05.324 fused_ordering(246) 00:11:05.324 fused_ordering(247) 00:11:05.324 fused_ordering(248) 00:11:05.324 fused_ordering(249) 00:11:05.324 fused_ordering(250) 00:11:05.324 fused_ordering(251) 00:11:05.324 fused_ordering(252) 00:11:05.324 fused_ordering(253) 00:11:05.324 fused_ordering(254) 00:11:05.324 fused_ordering(255) 00:11:05.324 fused_ordering(256) 00:11:05.324 fused_ordering(257) 00:11:05.324 fused_ordering(258) 00:11:05.324 fused_ordering(259) 00:11:05.324 fused_ordering(260) 00:11:05.324 fused_ordering(261) 00:11:05.324 fused_ordering(262) 00:11:05.324 fused_ordering(263) 00:11:05.324 fused_ordering(264) 00:11:05.324 fused_ordering(265) 00:11:05.324 fused_ordering(266) 00:11:05.324 fused_ordering(267) 00:11:05.324 fused_ordering(268) 00:11:05.324 fused_ordering(269) 00:11:05.324 fused_ordering(270) 00:11:05.324 fused_ordering(271) 00:11:05.324 fused_ordering(272) 00:11:05.324 fused_ordering(273) 00:11:05.324 fused_ordering(274) 00:11:05.324 fused_ordering(275) 00:11:05.324 fused_ordering(276) 00:11:05.324 fused_ordering(277) 00:11:05.324 fused_ordering(278) 00:11:05.324 fused_ordering(279) 00:11:05.324 fused_ordering(280) 00:11:05.324 fused_ordering(281) 00:11:05.324 fused_ordering(282) 00:11:05.324 fused_ordering(283) 00:11:05.324 fused_ordering(284) 00:11:05.324 fused_ordering(285) 00:11:05.324 fused_ordering(286) 00:11:05.324 fused_ordering(287) 00:11:05.324 fused_ordering(288) 00:11:05.324 fused_ordering(289) 00:11:05.324 fused_ordering(290) 00:11:05.324 fused_ordering(291) 00:11:05.324 fused_ordering(292) 00:11:05.324 fused_ordering(293) 00:11:05.324 fused_ordering(294) 00:11:05.324 fused_ordering(295) 00:11:05.324 fused_ordering(296) 00:11:05.324 fused_ordering(297) 00:11:05.324 fused_ordering(298) 00:11:05.324 fused_ordering(299) 00:11:05.324 fused_ordering(300) 00:11:05.324 fused_ordering(301) 00:11:05.324 fused_ordering(302) 00:11:05.324 fused_ordering(303) 00:11:05.324 fused_ordering(304) 00:11:05.324 fused_ordering(305) 00:11:05.324 fused_ordering(306) 00:11:05.324 fused_ordering(307) 00:11:05.324 fused_ordering(308) 00:11:05.324 fused_ordering(309) 00:11:05.324 fused_ordering(310) 00:11:05.324 fused_ordering(311) 00:11:05.324 fused_ordering(312) 00:11:05.324 fused_ordering(313) 
00:11:05.324 fused_ordering(314) 00:11:05.324 fused_ordering(315) 00:11:05.324 fused_ordering(316) 00:11:05.324 fused_ordering(317) 00:11:05.324 fused_ordering(318) 00:11:05.324 fused_ordering(319) 00:11:05.324 fused_ordering(320) 00:11:05.324 fused_ordering(321) 00:11:05.324 fused_ordering(322) 00:11:05.324 fused_ordering(323) 00:11:05.324 fused_ordering(324) 00:11:05.324 fused_ordering(325) 00:11:05.324 fused_ordering(326) 00:11:05.324 fused_ordering(327) 00:11:05.324 fused_ordering(328) 00:11:05.324 fused_ordering(329) 00:11:05.324 fused_ordering(330) 00:11:05.324 fused_ordering(331) 00:11:05.324 fused_ordering(332) 00:11:05.324 fused_ordering(333) 00:11:05.324 fused_ordering(334) 00:11:05.324 fused_ordering(335) 00:11:05.324 fused_ordering(336) 00:11:05.324 fused_ordering(337) 00:11:05.324 fused_ordering(338) 00:11:05.324 fused_ordering(339) 00:11:05.324 fused_ordering(340) 00:11:05.324 fused_ordering(341) 00:11:05.324 fused_ordering(342) 00:11:05.324 fused_ordering(343) 00:11:05.324 fused_ordering(344) 00:11:05.324 fused_ordering(345) 00:11:05.324 fused_ordering(346) 00:11:05.324 fused_ordering(347) 00:11:05.324 fused_ordering(348) 00:11:05.324 fused_ordering(349) 00:11:05.324 fused_ordering(350) 00:11:05.324 fused_ordering(351) 00:11:05.324 fused_ordering(352) 00:11:05.324 fused_ordering(353) 00:11:05.324 fused_ordering(354) 00:11:05.324 fused_ordering(355) 00:11:05.324 fused_ordering(356) 00:11:05.324 fused_ordering(357) 00:11:05.324 fused_ordering(358) 00:11:05.324 fused_ordering(359) 00:11:05.324 fused_ordering(360) 00:11:05.324 fused_ordering(361) 00:11:05.324 fused_ordering(362) 00:11:05.324 fused_ordering(363) 00:11:05.324 fused_ordering(364) 00:11:05.324 fused_ordering(365) 00:11:05.324 fused_ordering(366) 00:11:05.324 fused_ordering(367) 00:11:05.324 fused_ordering(368) 00:11:05.324 fused_ordering(369) 00:11:05.324 fused_ordering(370) 00:11:05.324 fused_ordering(371) 00:11:05.324 fused_ordering(372) 00:11:05.324 fused_ordering(373) 00:11:05.324 fused_ordering(374) 00:11:05.324 fused_ordering(375) 00:11:05.324 fused_ordering(376) 00:11:05.324 fused_ordering(377) 00:11:05.324 fused_ordering(378) 00:11:05.324 fused_ordering(379) 00:11:05.324 fused_ordering(380) 00:11:05.324 fused_ordering(381) 00:11:05.324 fused_ordering(382) 00:11:05.324 fused_ordering(383) 00:11:05.324 fused_ordering(384) 00:11:05.324 fused_ordering(385) 00:11:05.324 fused_ordering(386) 00:11:05.324 fused_ordering(387) 00:11:05.324 fused_ordering(388) 00:11:05.324 fused_ordering(389) 00:11:05.324 fused_ordering(390) 00:11:05.324 fused_ordering(391) 00:11:05.324 fused_ordering(392) 00:11:05.324 fused_ordering(393) 00:11:05.324 fused_ordering(394) 00:11:05.324 fused_ordering(395) 00:11:05.324 fused_ordering(396) 00:11:05.324 fused_ordering(397) 00:11:05.324 fused_ordering(398) 00:11:05.324 fused_ordering(399) 00:11:05.324 fused_ordering(400) 00:11:05.324 fused_ordering(401) 00:11:05.324 fused_ordering(402) 00:11:05.324 fused_ordering(403) 00:11:05.324 fused_ordering(404) 00:11:05.324 fused_ordering(405) 00:11:05.324 fused_ordering(406) 00:11:05.324 fused_ordering(407) 00:11:05.324 fused_ordering(408) 00:11:05.324 fused_ordering(409) 00:11:05.324 fused_ordering(410) 00:11:05.324 fused_ordering(411) 00:11:05.324 fused_ordering(412) 00:11:05.324 fused_ordering(413) 00:11:05.324 fused_ordering(414) 00:11:05.324 fused_ordering(415) 00:11:05.324 fused_ordering(416) 00:11:05.324 fused_ordering(417) 00:11:05.324 fused_ordering(418) 00:11:05.324 fused_ordering(419) 00:11:05.324 fused_ordering(420) 00:11:05.324 
fused_ordering(421) 00:11:05.324 fused_ordering(422) 00:11:05.324 fused_ordering(423) 00:11:05.324 fused_ordering(424) 00:11:05.324 fused_ordering(425) 00:11:05.324 fused_ordering(426) 00:11:05.324 fused_ordering(427) 00:11:05.324 fused_ordering(428) 00:11:05.324 fused_ordering(429) 00:11:05.324 fused_ordering(430) 00:11:05.324 fused_ordering(431) 00:11:05.324 fused_ordering(432) 00:11:05.324 fused_ordering(433) 00:11:05.324 fused_ordering(434) 00:11:05.324 fused_ordering(435) 00:11:05.324 fused_ordering(436) 00:11:05.324 fused_ordering(437) 00:11:05.324 fused_ordering(438) 00:11:05.324 fused_ordering(439) 00:11:05.324 fused_ordering(440) 00:11:05.324 fused_ordering(441) 00:11:05.324 fused_ordering(442) 00:11:05.324 fused_ordering(443) 00:11:05.324 fused_ordering(444) 00:11:05.324 fused_ordering(445) 00:11:05.324 fused_ordering(446) 00:11:05.324 fused_ordering(447) 00:11:05.324 fused_ordering(448) 00:11:05.324 fused_ordering(449) 00:11:05.324 fused_ordering(450) 00:11:05.324 fused_ordering(451) 00:11:05.324 fused_ordering(452) 00:11:05.324 fused_ordering(453) 00:11:05.324 fused_ordering(454) 00:11:05.324 fused_ordering(455) 00:11:05.324 fused_ordering(456) 00:11:05.324 fused_ordering(457) 00:11:05.324 fused_ordering(458) 00:11:05.324 fused_ordering(459) 00:11:05.324 fused_ordering(460) 00:11:05.324 fused_ordering(461) 00:11:05.324 fused_ordering(462) 00:11:05.324 fused_ordering(463) 00:11:05.324 fused_ordering(464) 00:11:05.324 fused_ordering(465) 00:11:05.324 fused_ordering(466) 00:11:05.324 fused_ordering(467) 00:11:05.324 fused_ordering(468) 00:11:05.324 fused_ordering(469) 00:11:05.324 fused_ordering(470) 00:11:05.324 fused_ordering(471) 00:11:05.324 fused_ordering(472) 00:11:05.324 fused_ordering(473) 00:11:05.324 fused_ordering(474) 00:11:05.324 fused_ordering(475) 00:11:05.324 fused_ordering(476) 00:11:05.324 fused_ordering(477) 00:11:05.324 fused_ordering(478) 00:11:05.324 fused_ordering(479) 00:11:05.324 fused_ordering(480) 00:11:05.324 fused_ordering(481) 00:11:05.324 fused_ordering(482) 00:11:05.324 fused_ordering(483) 00:11:05.324 fused_ordering(484) 00:11:05.324 fused_ordering(485) 00:11:05.324 fused_ordering(486) 00:11:05.324 fused_ordering(487) 00:11:05.324 fused_ordering(488) 00:11:05.324 fused_ordering(489) 00:11:05.324 fused_ordering(490) 00:11:05.324 fused_ordering(491) 00:11:05.324 fused_ordering(492) 00:11:05.324 fused_ordering(493) 00:11:05.324 fused_ordering(494) 00:11:05.324 fused_ordering(495) 00:11:05.324 fused_ordering(496) 00:11:05.324 fused_ordering(497) 00:11:05.324 fused_ordering(498) 00:11:05.324 fused_ordering(499) 00:11:05.324 fused_ordering(500) 00:11:05.324 fused_ordering(501) 00:11:05.324 fused_ordering(502) 00:11:05.324 fused_ordering(503) 00:11:05.324 fused_ordering(504) 00:11:05.324 fused_ordering(505) 00:11:05.324 fused_ordering(506) 00:11:05.324 fused_ordering(507) 00:11:05.324 fused_ordering(508) 00:11:05.324 fused_ordering(509) 00:11:05.324 fused_ordering(510) 00:11:05.324 fused_ordering(511) 00:11:05.324 fused_ordering(512) 00:11:05.324 fused_ordering(513) 00:11:05.324 fused_ordering(514) 00:11:05.324 fused_ordering(515) 00:11:05.324 fused_ordering(516) 00:11:05.324 fused_ordering(517) 00:11:05.324 fused_ordering(518) 00:11:05.324 fused_ordering(519) 00:11:05.324 fused_ordering(520) 00:11:05.324 fused_ordering(521) 00:11:05.324 fused_ordering(522) 00:11:05.324 fused_ordering(523) 00:11:05.324 fused_ordering(524) 00:11:05.324 fused_ordering(525) 00:11:05.324 fused_ordering(526) 00:11:05.324 fused_ordering(527) 00:11:05.324 fused_ordering(528) 
00:11:05.324 fused_ordering(529) 00:11:05.324 fused_ordering(530) 00:11:05.324 fused_ordering(531) 00:11:05.324 fused_ordering(532) 00:11:05.324 fused_ordering(533) 00:11:05.324 fused_ordering(534) 00:11:05.324 fused_ordering(535) 00:11:05.324 fused_ordering(536) 00:11:05.325 fused_ordering(537) 00:11:05.325 fused_ordering(538) 00:11:05.325 fused_ordering(539) 00:11:05.325 fused_ordering(540) 00:11:05.325 fused_ordering(541) 00:11:05.325 fused_ordering(542) 00:11:05.325 fused_ordering(543) 00:11:05.325 fused_ordering(544) 00:11:05.325 fused_ordering(545) 00:11:05.325 fused_ordering(546) 00:11:05.325 fused_ordering(547) 00:11:05.325 fused_ordering(548) 00:11:05.325 fused_ordering(549) 00:11:05.325 fused_ordering(550) 00:11:05.325 fused_ordering(551) 00:11:05.325 fused_ordering(552) 00:11:05.325 fused_ordering(553) 00:11:05.325 fused_ordering(554) 00:11:05.325 fused_ordering(555) 00:11:05.325 fused_ordering(556) 00:11:05.325 fused_ordering(557) 00:11:05.325 fused_ordering(558) 00:11:05.325 fused_ordering(559) 00:11:05.325 fused_ordering(560) 00:11:05.325 fused_ordering(561) 00:11:05.325 fused_ordering(562) 00:11:05.325 fused_ordering(563) 00:11:05.325 fused_ordering(564) 00:11:05.325 fused_ordering(565) 00:11:05.325 fused_ordering(566) 00:11:05.325 fused_ordering(567) 00:11:05.325 fused_ordering(568) 00:11:05.325 fused_ordering(569) 00:11:05.325 fused_ordering(570) 00:11:05.325 fused_ordering(571) 00:11:05.325 fused_ordering(572) 00:11:05.325 fused_ordering(573) 00:11:05.325 fused_ordering(574) 00:11:05.325 fused_ordering(575) 00:11:05.325 fused_ordering(576) 00:11:05.325 fused_ordering(577) 00:11:05.325 fused_ordering(578) 00:11:05.325 fused_ordering(579) 00:11:05.325 fused_ordering(580) 00:11:05.325 fused_ordering(581) 00:11:05.325 fused_ordering(582) 00:11:05.325 fused_ordering(583) 00:11:05.325 fused_ordering(584) 00:11:05.325 fused_ordering(585) 00:11:05.325 fused_ordering(586) 00:11:05.325 fused_ordering(587) 00:11:05.325 fused_ordering(588) 00:11:05.325 fused_ordering(589) 00:11:05.325 fused_ordering(590) 00:11:05.325 fused_ordering(591) 00:11:05.325 fused_ordering(592) 00:11:05.325 fused_ordering(593) 00:11:05.325 fused_ordering(594) 00:11:05.325 fused_ordering(595) 00:11:05.325 fused_ordering(596) 00:11:05.325 fused_ordering(597) 00:11:05.325 fused_ordering(598) 00:11:05.325 fused_ordering(599) 00:11:05.325 fused_ordering(600) 00:11:05.325 fused_ordering(601) 00:11:05.325 fused_ordering(602) 00:11:05.325 fused_ordering(603) 00:11:05.325 fused_ordering(604) 00:11:05.325 fused_ordering(605) 00:11:05.325 fused_ordering(606) 00:11:05.325 fused_ordering(607) 00:11:05.325 fused_ordering(608) 00:11:05.325 fused_ordering(609) 00:11:05.325 fused_ordering(610) 00:11:05.325 fused_ordering(611) 00:11:05.325 fused_ordering(612) 00:11:05.325 fused_ordering(613) 00:11:05.325 fused_ordering(614) 00:11:05.325 fused_ordering(615) 00:11:05.583 fused_ordering(616) 00:11:05.583 fused_ordering(617) 00:11:05.583 fused_ordering(618) 00:11:05.583 fused_ordering(619) 00:11:05.583 fused_ordering(620) 00:11:05.583 fused_ordering(621) 00:11:05.583 fused_ordering(622) 00:11:05.583 fused_ordering(623) 00:11:05.583 fused_ordering(624) 00:11:05.583 fused_ordering(625) 00:11:05.583 fused_ordering(626) 00:11:05.583 fused_ordering(627) 00:11:05.583 fused_ordering(628) 00:11:05.583 fused_ordering(629) 00:11:05.583 fused_ordering(630) 00:11:05.583 fused_ordering(631) 00:11:05.583 fused_ordering(632) 00:11:05.583 fused_ordering(633) 00:11:05.583 fused_ordering(634) 00:11:05.583 fused_ordering(635) 00:11:05.583 
fused_ordering(636) 00:11:05.583 fused_ordering(637) 00:11:05.583 fused_ordering(638) 00:11:05.583 fused_ordering(639) 00:11:05.583 fused_ordering(640) 00:11:05.583 fused_ordering(641) 00:11:05.583 fused_ordering(642) 00:11:05.583 fused_ordering(643) 00:11:05.583 fused_ordering(644) 00:11:05.583 fused_ordering(645) 00:11:05.583 fused_ordering(646) 00:11:05.583 fused_ordering(647) 00:11:05.583 fused_ordering(648) 00:11:05.583 fused_ordering(649) 00:11:05.583 fused_ordering(650) 00:11:05.583 fused_ordering(651) 00:11:05.583 fused_ordering(652) 00:11:05.583 fused_ordering(653) 00:11:05.583 fused_ordering(654) 00:11:05.583 fused_ordering(655) 00:11:05.583 fused_ordering(656) 00:11:05.583 fused_ordering(657) 00:11:05.583 fused_ordering(658) 00:11:05.583 fused_ordering(659) 00:11:05.583 fused_ordering(660) 00:11:05.583 fused_ordering(661) 00:11:05.583 fused_ordering(662) 00:11:05.583 fused_ordering(663) 00:11:05.583 fused_ordering(664) 00:11:05.583 fused_ordering(665) 00:11:05.583 fused_ordering(666) 00:11:05.583 fused_ordering(667) 00:11:05.583 fused_ordering(668) 00:11:05.583 fused_ordering(669) 00:11:05.583 fused_ordering(670) 00:11:05.583 fused_ordering(671) 00:11:05.583 fused_ordering(672) 00:11:05.583 fused_ordering(673) 00:11:05.583 fused_ordering(674) 00:11:05.583 fused_ordering(675) 00:11:05.583 fused_ordering(676) 00:11:05.583 fused_ordering(677) 00:11:05.583 fused_ordering(678) 00:11:05.583 fused_ordering(679) 00:11:05.583 fused_ordering(680) 00:11:05.583 fused_ordering(681) 00:11:05.583 fused_ordering(682) 00:11:05.583 fused_ordering(683) 00:11:05.583 fused_ordering(684) 00:11:05.583 fused_ordering(685) 00:11:05.583 fused_ordering(686) 00:11:05.583 fused_ordering(687) 00:11:05.583 fused_ordering(688) 00:11:05.583 fused_ordering(689) 00:11:05.583 fused_ordering(690) 00:11:05.583 fused_ordering(691) 00:11:05.583 fused_ordering(692) 00:11:05.583 fused_ordering(693) 00:11:05.583 fused_ordering(694) 00:11:05.583 fused_ordering(695) 00:11:05.583 fused_ordering(696) 00:11:05.583 fused_ordering(697) 00:11:05.583 fused_ordering(698) 00:11:05.583 fused_ordering(699) 00:11:05.583 fused_ordering(700) 00:11:05.583 fused_ordering(701) 00:11:05.583 fused_ordering(702) 00:11:05.583 fused_ordering(703) 00:11:05.583 fused_ordering(704) 00:11:05.583 fused_ordering(705) 00:11:05.583 fused_ordering(706) 00:11:05.583 fused_ordering(707) 00:11:05.583 fused_ordering(708) 00:11:05.583 fused_ordering(709) 00:11:05.583 fused_ordering(710) 00:11:05.583 fused_ordering(711) 00:11:05.583 fused_ordering(712) 00:11:05.583 fused_ordering(713) 00:11:05.583 fused_ordering(714) 00:11:05.583 fused_ordering(715) 00:11:05.583 fused_ordering(716) 00:11:05.583 fused_ordering(717) 00:11:05.583 fused_ordering(718) 00:11:05.583 fused_ordering(719) 00:11:05.583 fused_ordering(720) 00:11:05.583 fused_ordering(721) 00:11:05.583 fused_ordering(722) 00:11:05.583 fused_ordering(723) 00:11:05.583 fused_ordering(724) 00:11:05.583 fused_ordering(725) 00:11:05.583 fused_ordering(726) 00:11:05.583 fused_ordering(727) 00:11:05.583 fused_ordering(728) 00:11:05.583 fused_ordering(729) 00:11:05.583 fused_ordering(730) 00:11:05.583 fused_ordering(731) 00:11:05.583 fused_ordering(732) 00:11:05.583 fused_ordering(733) 00:11:05.583 fused_ordering(734) 00:11:05.583 fused_ordering(735) 00:11:05.583 fused_ordering(736) 00:11:05.583 fused_ordering(737) 00:11:05.583 fused_ordering(738) 00:11:05.583 fused_ordering(739) 00:11:05.583 fused_ordering(740) 00:11:05.583 fused_ordering(741) 00:11:05.583 fused_ordering(742) 00:11:05.583 fused_ordering(743) 
00:11:05.583 fused_ordering(744) 00:11:05.583 fused_ordering(745) 00:11:05.583 fused_ordering(746) 00:11:05.583 fused_ordering(747) 00:11:05.583 fused_ordering(748) 00:11:05.583 fused_ordering(749) 00:11:05.583 fused_ordering(750) 00:11:05.583 fused_ordering(751) 00:11:05.583 fused_ordering(752) 00:11:05.583 fused_ordering(753) 00:11:05.583 fused_ordering(754) 00:11:05.583 fused_ordering(755) 00:11:05.583 fused_ordering(756) 00:11:05.583 fused_ordering(757) 00:11:05.583 fused_ordering(758) 00:11:05.583 fused_ordering(759) 00:11:05.583 fused_ordering(760) 00:11:05.583 fused_ordering(761) 00:11:05.583 fused_ordering(762) 00:11:05.583 fused_ordering(763) 00:11:05.583 fused_ordering(764) 00:11:05.583 fused_ordering(765) 00:11:05.583 fused_ordering(766) 00:11:05.583 fused_ordering(767) 00:11:05.583 fused_ordering(768) 00:11:05.583 fused_ordering(769) 00:11:05.583 fused_ordering(770) 00:11:05.583 fused_ordering(771) 00:11:05.583 fused_ordering(772) 00:11:05.583 fused_ordering(773) 00:11:05.583 fused_ordering(774) 00:11:05.583 fused_ordering(775) 00:11:05.583 fused_ordering(776) 00:11:05.583 fused_ordering(777) 00:11:05.583 fused_ordering(778) 00:11:05.583 fused_ordering(779) 00:11:05.583 fused_ordering(780) 00:11:05.583 fused_ordering(781) 00:11:05.583 fused_ordering(782) 00:11:05.583 fused_ordering(783) 00:11:05.583 fused_ordering(784) 00:11:05.583 fused_ordering(785) 00:11:05.583 fused_ordering(786) 00:11:05.583 fused_ordering(787) 00:11:05.583 fused_ordering(788) 00:11:05.583 fused_ordering(789) 00:11:05.583 fused_ordering(790) 00:11:05.583 fused_ordering(791) 00:11:05.583 fused_ordering(792) 00:11:05.583 fused_ordering(793) 00:11:05.583 fused_ordering(794) 00:11:05.583 fused_ordering(795) 00:11:05.583 fused_ordering(796) 00:11:05.583 fused_ordering(797) 00:11:05.583 fused_ordering(798) 00:11:05.583 fused_ordering(799) 00:11:05.583 fused_ordering(800) 00:11:05.583 fused_ordering(801) 00:11:05.583 fused_ordering(802) 00:11:05.583 fused_ordering(803) 00:11:05.583 fused_ordering(804) 00:11:05.583 fused_ordering(805) 00:11:05.583 fused_ordering(806) 00:11:05.583 fused_ordering(807) 00:11:05.583 fused_ordering(808) 00:11:05.583 fused_ordering(809) 00:11:05.583 fused_ordering(810) 00:11:05.583 fused_ordering(811) 00:11:05.583 fused_ordering(812) 00:11:05.583 fused_ordering(813) 00:11:05.583 fused_ordering(814) 00:11:05.583 fused_ordering(815) 00:11:05.583 fused_ordering(816) 00:11:05.583 fused_ordering(817) 00:11:05.583 fused_ordering(818) 00:11:05.583 fused_ordering(819) 00:11:05.583 fused_ordering(820) 00:11:05.842 fused_ordering(821) 00:11:05.842 fused_ordering(822) 00:11:05.842 fused_ordering(823) 00:11:05.842 fused_ordering(824) 00:11:05.842 fused_ordering(825) 00:11:05.842 fused_ordering(826) 00:11:05.842 fused_ordering(827) 00:11:05.842 fused_ordering(828) 00:11:05.842 fused_ordering(829) 00:11:05.842 fused_ordering(830) 00:11:05.842 fused_ordering(831) 00:11:05.842 fused_ordering(832) 00:11:05.842 fused_ordering(833) 00:11:05.842 fused_ordering(834) 00:11:05.842 fused_ordering(835) 00:11:05.842 fused_ordering(836) 00:11:05.842 fused_ordering(837) 00:11:05.842 fused_ordering(838) 00:11:05.842 fused_ordering(839) 00:11:05.842 fused_ordering(840) 00:11:05.842 fused_ordering(841) 00:11:05.842 fused_ordering(842) 00:11:05.842 fused_ordering(843) 00:11:05.842 fused_ordering(844) 00:11:05.842 fused_ordering(845) 00:11:05.842 fused_ordering(846) 00:11:05.842 fused_ordering(847) 00:11:05.842 fused_ordering(848) 00:11:05.842 fused_ordering(849) 00:11:05.842 fused_ordering(850) 00:11:05.842 
fused_ordering(851) 00:11:05.842 fused_ordering(852) 00:11:05.842 fused_ordering(853) 00:11:05.842 fused_ordering(854) 00:11:05.842 fused_ordering(855) 00:11:05.842 fused_ordering(856) 00:11:05.842 fused_ordering(857) 00:11:05.842 fused_ordering(858) 00:11:05.842 fused_ordering(859) 00:11:05.842 fused_ordering(860) 00:11:05.842 fused_ordering(861) 00:11:05.842 fused_ordering(862) 00:11:05.842 fused_ordering(863) 00:11:05.842 fused_ordering(864) 00:11:05.842 fused_ordering(865) 00:11:05.842 fused_ordering(866) 00:11:05.842 fused_ordering(867) 00:11:05.842 fused_ordering(868) 00:11:05.842 fused_ordering(869) 00:11:05.842 fused_ordering(870) 00:11:05.842 fused_ordering(871) 00:11:05.842 fused_ordering(872) 00:11:05.842 fused_ordering(873) 00:11:05.842 fused_ordering(874) 00:11:05.842 fused_ordering(875) 00:11:05.842 fused_ordering(876) 00:11:05.842 fused_ordering(877) 00:11:05.842 fused_ordering(878) 00:11:05.842 fused_ordering(879) 00:11:05.842 fused_ordering(880) 00:11:05.842 fused_ordering(881) 00:11:05.842 fused_ordering(882) 00:11:05.842 fused_ordering(883) 00:11:05.842 fused_ordering(884) 00:11:05.842 fused_ordering(885) 00:11:05.842 fused_ordering(886) 00:11:05.842 fused_ordering(887) 00:11:05.842 fused_ordering(888) 00:11:05.842 fused_ordering(889) 00:11:05.842 fused_ordering(890) 00:11:05.842 fused_ordering(891) 00:11:05.842 fused_ordering(892) 00:11:05.842 fused_ordering(893) 00:11:05.842 fused_ordering(894) 00:11:05.842 fused_ordering(895) 00:11:05.842 fused_ordering(896) 00:11:05.842 fused_ordering(897) 00:11:05.842 fused_ordering(898) 00:11:05.842 fused_ordering(899) 00:11:05.842 fused_ordering(900) 00:11:05.842 fused_ordering(901) 00:11:05.842 fused_ordering(902) 00:11:05.842 fused_ordering(903) 00:11:05.842 fused_ordering(904) 00:11:05.842 fused_ordering(905) 00:11:05.842 fused_ordering(906) 00:11:05.842 fused_ordering(907) 00:11:05.842 fused_ordering(908) 00:11:05.842 fused_ordering(909) 00:11:05.842 fused_ordering(910) 00:11:05.842 fused_ordering(911) 00:11:05.842 fused_ordering(912) 00:11:05.842 fused_ordering(913) 00:11:05.842 fused_ordering(914) 00:11:05.842 fused_ordering(915) 00:11:05.842 fused_ordering(916) 00:11:05.842 fused_ordering(917) 00:11:05.842 fused_ordering(918) 00:11:05.842 fused_ordering(919) 00:11:05.842 fused_ordering(920) 00:11:05.842 fused_ordering(921) 00:11:05.842 fused_ordering(922) 00:11:05.842 fused_ordering(923) 00:11:05.842 fused_ordering(924) 00:11:05.842 fused_ordering(925) 00:11:05.842 fused_ordering(926) 00:11:05.842 fused_ordering(927) 00:11:05.842 fused_ordering(928) 00:11:05.842 fused_ordering(929) 00:11:05.842 fused_ordering(930) 00:11:05.842 fused_ordering(931) 00:11:05.842 fused_ordering(932) 00:11:05.842 fused_ordering(933) 00:11:05.842 fused_ordering(934) 00:11:05.842 fused_ordering(935) 00:11:05.842 fused_ordering(936) 00:11:05.842 fused_ordering(937) 00:11:05.842 fused_ordering(938) 00:11:05.842 fused_ordering(939) 00:11:05.842 fused_ordering(940) 00:11:05.842 fused_ordering(941) 00:11:05.842 fused_ordering(942) 00:11:05.842 fused_ordering(943) 00:11:05.842 fused_ordering(944) 00:11:05.842 fused_ordering(945) 00:11:05.842 fused_ordering(946) 00:11:05.842 fused_ordering(947) 00:11:05.842 fused_ordering(948) 00:11:05.842 fused_ordering(949) 00:11:05.842 fused_ordering(950) 00:11:05.842 fused_ordering(951) 00:11:05.842 fused_ordering(952) 00:11:05.842 fused_ordering(953) 00:11:05.842 fused_ordering(954) 00:11:05.842 fused_ordering(955) 00:11:05.842 fused_ordering(956) 00:11:05.842 fused_ordering(957) 00:11:05.842 fused_ordering(958) 
00:11:05.842 fused_ordering(959) 00:11:05.842 fused_ordering(960) 00:11:05.842 fused_ordering(961) 00:11:05.842 fused_ordering(962) 00:11:05.842 fused_ordering(963) 00:11:05.842 fused_ordering(964) 00:11:05.842 fused_ordering(965) 00:11:05.842 fused_ordering(966) 00:11:05.842 fused_ordering(967) 00:11:05.842 fused_ordering(968) 00:11:05.842 fused_ordering(969) 00:11:05.842 fused_ordering(970) 00:11:05.842 fused_ordering(971) 00:11:05.842 fused_ordering(972) 00:11:05.842 fused_ordering(973) 00:11:05.842 fused_ordering(974) 00:11:05.842 fused_ordering(975) 00:11:05.842 fused_ordering(976) 00:11:05.842 fused_ordering(977) 00:11:05.842 fused_ordering(978) 00:11:05.842 fused_ordering(979) 00:11:05.842 fused_ordering(980) 00:11:05.842 fused_ordering(981) 00:11:05.842 fused_ordering(982) 00:11:05.842 fused_ordering(983) 00:11:05.842 fused_ordering(984) 00:11:05.842 fused_ordering(985) 00:11:05.842 fused_ordering(986) 00:11:05.842 fused_ordering(987) 00:11:05.842 fused_ordering(988) 00:11:05.842 fused_ordering(989) 00:11:05.842 fused_ordering(990) 00:11:05.842 fused_ordering(991) 00:11:05.842 fused_ordering(992) 00:11:05.842 fused_ordering(993) 00:11:05.842 fused_ordering(994) 00:11:05.842 fused_ordering(995) 00:11:05.842 fused_ordering(996) 00:11:05.842 fused_ordering(997) 00:11:05.843 fused_ordering(998) 00:11:05.843 fused_ordering(999) 00:11:05.843 fused_ordering(1000) 00:11:05.843 fused_ordering(1001) 00:11:05.843 fused_ordering(1002) 00:11:05.843 fused_ordering(1003) 00:11:05.843 fused_ordering(1004) 00:11:05.843 fused_ordering(1005) 00:11:05.843 fused_ordering(1006) 00:11:05.843 fused_ordering(1007) 00:11:05.843 fused_ordering(1008) 00:11:05.843 fused_ordering(1009) 00:11:05.843 fused_ordering(1010) 00:11:05.843 fused_ordering(1011) 00:11:05.843 fused_ordering(1012) 00:11:05.843 fused_ordering(1013) 00:11:05.843 fused_ordering(1014) 00:11:05.843 fused_ordering(1015) 00:11:05.843 fused_ordering(1016) 00:11:05.843 fused_ordering(1017) 00:11:05.843 fused_ordering(1018) 00:11:05.843 fused_ordering(1019) 00:11:05.843 fused_ordering(1020) 00:11:05.843 fused_ordering(1021) 00:11:05.843 fused_ordering(1022) 00:11:05.843 fused_ordering(1023) 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:05.843 rmmod nvme_rdma 00:11:05.843 rmmod nvme_fabrics 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1377524 ']' 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- 
nvmf/common.sh@490 -- # killprocess 1377524 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@942 -- # '[' -z 1377524 ']' 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # kill -0 1377524 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # uname 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1377524 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1377524' 00:11:05.843 killing process with pid 1377524 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@961 -- # kill 1377524 00:11:05.843 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # wait 1377524 00:11:06.102 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:06.102 23:36:54 nvmf_rdma.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:06.102 00:11:06.102 real 0m7.401s 00:11:06.102 user 0m4.260s 00:11:06.102 sys 0m4.366s 00:11:06.102 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:06.102 23:36:54 nvmf_rdma.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:06.102 ************************************ 00:11:06.102 END TEST nvmf_fused_ordering 00:11:06.102 ************************************ 00:11:06.102 23:36:54 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:11:06.102 23:36:54 nvmf_rdma -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:06.102 23:36:54 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:06.102 23:36:54 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:06.102 23:36:54 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:06.102 ************************************ 00:11:06.102 START TEST nvmf_delete_subsystem 00:11:06.102 ************************************ 00:11:06.102 23:36:54 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:11:06.102 * Looking for test storage... 
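The killprocess/kill/wait sequence traced above is the standard autotest teardown: verify the PID still exists with kill -0, check the process name with ps --no-headers -o comm=, then kill it and wait for it to exit. A minimal stand-alone sketch of that pattern (a hypothetical helper, not the literal autotest_common.sh code):

    # Stop a background test daemon by PID; tolerate the case where it already exited.
    killprocess_sketch() {
        local pid=$1
        [ -z "$pid" ] && return 1                  # no PID recorded, nothing to kill
        kill -0 "$pid" 2>/dev/null || return 0     # process is already gone
        ps --no-headers -o comm= "$pid"            # log what is about to be killed
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it if it is a child of this shell
    }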
00:11:06.102 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:06.102 23:36:55 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:11.372 23:37:00 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:11.372 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:11.372 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:11.372 Found net devices under 0000:da:00.0: mlx_0_0 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.372 23:37:00 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:11.372 Found net devices under 0000:da:00.1: mlx_0_1 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:11.372 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.373 23:37:00 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:11.373 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:11.373 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:11.373 altname enp218s0f0np0 00:11:11.373 altname ens818f0np0 00:11:11.373 inet 192.168.100.8/24 scope global mlx_0_0 00:11:11.373 valid_lft forever preferred_lft forever 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:11.373 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:11.373 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:11.373 altname enp218s0f1np1 00:11:11.373 altname ens818f1np1 00:11:11.373 inet 192.168.100.9/24 scope global mlx_0_1 00:11:11.373 valid_lft forever preferred_lft forever 00:11:11.373 23:37:00 
nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:11.373 192.168.100.9' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:11.373 192.168.100.9' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:11.373 192.168.100.9' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1380834 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1380834 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@823 -- # '[' -z 1380834 ']' 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:11.373 23:37:00 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.373 [2024-07-15 23:37:00.304416] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
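The allocate_nic_ips step above pulls each RDMA port's IPv4 address with an ip/awk/cut pipeline and records 192.168.100.8 and 192.168.100.9 as the first and second target IPs. A sketch that mirrors the commands traced at nvmf/common.sh@112-113 (interface names taken from this run):

    # Print the first IPv4 address on an interface, stripped of its /prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 in this run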
00:11:11.373 [2024-07-15 23:37:00.304465] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.632 [2024-07-15 23:37:00.363706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:11.632 [2024-07-15 23:37:00.440948] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.632 [2024-07-15 23:37:00.440989] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.632 [2024-07-15 23:37:00.440995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.632 [2024-07-15 23:37:00.441001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.632 [2024-07-15 23:37:00.441008] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.632 [2024-07-15 23:37:00.441051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.632 [2024-07-15 23:37:00.441054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # return 0 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.198 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.198 [2024-07-15 23:37:01.156383] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10b83c0/0x10bc8b0) succeed. 00:11:12.198 [2024-07-15 23:37:01.165249] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10b9870/0x10fdf40) succeed. 
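nvmfappstart above forks build/bin/nvmf_tgt with core mask 0x3 and then waits for its RPC socket before any rpc_cmd calls are issued. Roughly, and assuming the default /var/tmp/spdk.sock socket shown in the waitforlisten message, the startup looks like this sketch:

    # Launch the NVMe-oF target on cores 0-1 and poll until its RPC server answers.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                  # target not listening yet
    done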
00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.455 [2024-07-15 23:37:01.254634] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.455 NULL1 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.455 Delay0 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1381081 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:11:12.455 23:37:01 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:12.455 [2024-07-15 23:37:01.351483] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
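The rpc_cmd calls traced above (delete_subsystem.sh lines 15-26) assemble the target that the perf job runs against: an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 capped at 10 namespaces, a listener on 192.168.100.8:4420, and a Delay0 bdev stacked on a null bdev so every I/O sits in the target for roughly a second, which is what leaves commands queued when the subsystem is later deleted. The same setup through scripts/rpc.py would look roughly like this (default RPC socket assumed):

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512-byte blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0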
00:11:14.352 23:37:03 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.352 23:37:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:14.352 23:37:03 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.720 NVMe io qpair process completion error 00:11:15.720 NVMe io qpair process completion error 00:11:15.720 NVMe io qpair process completion error 00:11:15.720 NVMe io qpair process completion error 00:11:15.720 NVMe io qpair process completion error 00:11:15.720 NVMe io qpair process completion error 00:11:15.720 23:37:04 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:15.720 23:37:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:11:15.720 23:37:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1381081 00:11:15.720 23:37:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:15.977 23:37:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:15.977 23:37:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1381081 00:11:15.977 23:37:04 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:16.541 Write completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Read completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 
starting I/O failed: -6 00:11:16.542 Write completed with error (sct=0, sc=8) 00:11:16.542 [repeated 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries from the outstanding perf I/O continue here, stamped 00:11:16.542-00:11:16.543] 00:11:16.543 Read completed with error (sct=0, sc=8)
00:11:16.543 Write completed with error (sct=0, sc=8) 00:11:16.543 Write completed with error (sct=0, sc=8) 00:11:16.543 Write completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Write completed with error (sct=0, sc=8) 00:11:16.543 Write completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Write completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Read completed with error (sct=0, sc=8) 00:11:16.543 Initializing NVMe Controllers 00:11:16.543 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:16.543 Controller IO queue size 128, less than required. 00:11:16.543 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:16.543 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:16.543 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:16.543 Initialization complete. Launching workers. 00:11:16.543 ======================================================== 00:11:16.543 Latency(us) 00:11:16.543 Device Information : IOPS MiB/s Average min max 00:11:16.543 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.50 0.04 1593442.43 1000136.48 2975073.82 00:11:16.543 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.50 0.04 1594905.65 1001250.28 2976338.11 00:11:16.543 ======================================================== 00:11:16.543 Total : 161.01 0.08 1594174.04 1000136.48 2976338.11 00:11:16.543 00:11:16.543 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:16.543 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1381081 00:11:16.543 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:11:16.543 [2024-07-15 23:37:05.450052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:16.543 [2024-07-15 23:37:05.450093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
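The summary table above is spdk_nvme_perf's final report: each of the two qpairs on lcores 2 and 3 completes only about 80 IOPS at roughly 1.6 s average latency, because the delay bdev holds every I/O and the subsystem is deleted mid-run; the outstanding commands then complete with errors (sct=0, sc=8) and perf exits reporting 'errors occurred', which is the behavior this test exercises. For reference, the perf invocation came from delete_subsystem.sh line 26 and has this shape (flags copied from the trace; -q is queue depth, -w/-M a 70/30 random read/write mix, -o the 512-byte I/O size, -t the run time in seconds):

    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4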
00:11:16.543 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1381081 00:11:17.107 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1381081) - No such process 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1381081 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # local es=0 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # valid_exec_arg wait 1381081 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@630 -- # local arg=wait 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # type -t wait 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # wait 1381081 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@645 -- # es=1 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.107 [2024-07-15 23:37:05.968219] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1381784 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@56 -- # delay=0 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:17.107 23:37:05 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:17.107 [2024-07-15 23:37:06.056249] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:11:17.672 23:37:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:17.672 23:37:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:17.672 23:37:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:18.236 23:37:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:18.236 23:37:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:18.236 23:37:06 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:18.800 23:37:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:18.800 23:37:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:18.800 23:37:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:19.059 23:37:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:19.059 23:37:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:19.059 23:37:07 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:19.625 23:37:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:19.625 23:37:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:19.625 23:37:08 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.192 23:37:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.192 23:37:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:20.192 23:37:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:20.790 23:37:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:20.790 23:37:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:20.790 23:37:09 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:21.049 23:37:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:21.049 23:37:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:21.049 23:37:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:21.617 23:37:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:21.617 23:37:10 
nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:21.617 23:37:10 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.184 23:37:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:22.184 23:37:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:22.184 23:37:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:22.749 23:37:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:22.749 23:37:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:22.749 23:37:11 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.313 23:37:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.313 23:37:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:23.313 23:37:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:23.570 23:37:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:23.570 23:37:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:23.570 23:37:12 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.136 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.136 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:24.136 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:11:24.438 Initializing NVMe Controllers 00:11:24.438 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:11:24.438 Controller IO queue size 128, less than required. 00:11:24.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:24.438 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:11:24.438 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:11:24.438 Initialization complete. Launching workers. 
00:11:24.438 ======================================================== 00:11:24.438 Latency(us) 00:11:24.438 Device Information : IOPS MiB/s Average min max 00:11:24.438 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001397.13 1000057.37 1004949.82 00:11:24.438 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002186.06 1000119.86 1006170.17 00:11:24.438 ======================================================== 00:11:24.438 Total : 256.00 0.12 1001791.60 1000057.37 1006170.17 00:11:24.438 00:11:24.707 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:24.707 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1381784 00:11:24.708 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1381784) - No such process 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1381784 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:24.708 rmmod nvme_rdma 00:11:24.708 rmmod nvme_fabrics 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1380834 ']' 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1380834 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@942 -- # '[' -z 1380834 ']' 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # kill -0 1380834 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # uname 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1380834 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1380834' 00:11:24.708 killing process with pid 1380834 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@961 -- # kill 
1380834 00:11:24.708 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # wait 1380834 00:11:24.967 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.967 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:24.967 00:11:24.967 real 0m18.924s 00:11:24.967 user 0m49.534s 00:11:24.967 sys 0m5.047s 00:11:24.967 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:24.967 23:37:13 nvmf_rdma.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:24.967 ************************************ 00:11:24.967 END TEST nvmf_delete_subsystem 00:11:24.967 ************************************ 00:11:24.967 23:37:13 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:11:24.967 23:37:13 nvmf_rdma -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:11:24.967 23:37:13 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:24.967 23:37:13 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:24.967 23:37:13 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:24.967 ************************************ 00:11:24.967 START TEST nvmf_ns_masking 00:11:24.967 ************************************ 00:11:24.967 23:37:13 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1117 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:11:25.226 * Looking for test storage... 00:11:25.226 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.226 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=0fbe8723-8049-432f-978d-f85b13fa37ca 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=dc7945eb-979f-4d5c-9975-83ab5e314c36 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a27164d0-5ddb-44f9-847f-421d7e85f956 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:25.227 23:37:14 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:30.492 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:30.493 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:30.493 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:30.493 Found net devices under 0000:da:00.0: mlx_0_0 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:30.493 Found net devices under 0000:da:00.1: mlx_0_1 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@58 -- 
# '[' Linux '!=' Linux ']' 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:30.493 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:30.493 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:30.493 altname enp218s0f0np0 00:11:30.493 altname ens818f0np0 00:11:30.493 inet 192.168.100.8/24 scope global mlx_0_0 00:11:30.493 valid_lft forever preferred_lft forever 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:30.493 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:30.493 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:30.493 altname enp218s0f1np1 00:11:30.493 altname ens818f1np1 00:11:30.493 inet 192.168.100.9/24 scope global mlx_0_1 00:11:30.493 valid_lft forever preferred_lft forever 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:30.493 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:30.494 192.168.100.9' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:30.494 192.168.100.9' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:30.494 192.168.100.9' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1386233 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1386233 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 1386233 ']' 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:30.494 23:37:19 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:30.494 [2024-07-15 23:37:19.330681] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:11:30.494 [2024-07-15 23:37:19.330724] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.494 [2024-07-15 23:37:19.387041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.494 [2024-07-15 23:37:19.462693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.494 [2024-07-15 23:37:19.462732] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.494 [2024-07-15 23:37:19.462740] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.494 [2024-07-15 23:37:19.462746] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.494 [2024-07-15 23:37:19.462750] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:30.494 [2024-07-15 23:37:19.462774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.430 23:37:20 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:31.430 23:37:20 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:11:31.430 23:37:20 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:31.430 23:37:20 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:31.430 23:37:20 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:31.430 23:37:20 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.430 23:37:20 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:31.430 [2024-07-15 23:37:20.343640] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x178b910/0x178fe00) succeed. 00:11:31.430 [2024-07-15 23:37:20.352444] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x178ce10/0x17d1490) succeed. 
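With the RDMA transport created and both mlx5 ports registered, the trace below builds and exercises the namespace-masking fixture: two malloc bdevs are exposed through nqn.2016-06.io.spdk:cnode1, the host connects as nqn.2016-06.io.spdk:host1, and per-host visibility of each NSID is judged from the NGUID that nvme id-ns reports (in this test an all-zero NGUID means the namespace is hidden from the connected host). Condensed from the rpc.py and nvme-cli calls that follow, the flow looks roughly like this; the $rpc shorthand is introduced here for brevity, the values are copied from the trace, and ordering and error handling are simplified.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Target side: back two namespaces with malloc bdevs and listen on RDMA.
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Host side: connect as host1 and inspect which NSIDs are visible.
    nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I a27164d0-5ddb-44f9-847f-421d7e85f956 -a 192.168.100.8 -s 4420 -i 4
    nvme list-ns /dev/nvme0
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid   # all zeros => masked

    # Masking proper: re-add namespace 1 without auto-visibility, then grant
    # and revoke access for a specific host NQN.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The host-side NGUID comparison is exactly what the ns_is_visible helper in the trace automates with nvme list-ns, nvme id-ns, and jq.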
00:11:31.688 23:37:20 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:31.688 23:37:20 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:31.688 23:37:20 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:31.688 Malloc1 00:11:31.688 23:37:20 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:31.947 Malloc2 00:11:31.947 23:37:20 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:32.205 23:37:20 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:32.205 23:37:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:32.464 [2024-07-15 23:37:21.250964] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:32.464 23:37:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:32.464 23:37:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a27164d0-5ddb-44f9-847f-421d7e85f956 -a 192.168.100.8 -s 4420 -i 4 00:11:32.722 23:37:21 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.722 23:37:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:11:32.722 23:37:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.722 23:37:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:11:32.722 23:37:21 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:11:34.623 23:37:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:11:34.623 23:37:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:11:34.623 23:37:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.623 23:37:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:11:34.623 23:37:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.623 23:37:23 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:11:34.623 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:34.623 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.881 [ 0]:0x1 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61ea7cd78e1d4e779cde044f9d9b1a9a 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61ea7cd78e1d4e779cde044f9d9b1a9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:34.881 [ 0]:0x1 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:34.881 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:35.138 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61ea7cd78e1d4e779cde044f9d9b1a9a 00:11:35.138 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61ea7cd78e1d4e779cde044f9d9b1a9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.138 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:35.138 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:35.138 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:35.138 [ 1]:0x2 00:11:35.138 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:35.138 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:35.138 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7250ac824bcf4d4582fc7da821d5bdd1 00:11:35.139 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7250ac824bcf4d4582fc7da821d5bdd1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:35.139 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:35.139 23:37:23 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.396 23:37:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.654 23:37:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:35.913 23:37:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:35.913 23:37:24 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a27164d0-5ddb-44f9-847f-421d7e85f956 -a 192.168.100.8 -s 4420 -i 4 00:11:36.171 23:37:24 
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:36.171 23:37:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:11:36.171 23:37:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.171 23:37:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 1 ]] 00:11:36.171 23:37:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=1 00:11:36.171 23:37:24 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:11:38.070 23:37:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:11:38.070 23:37:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:11:38.070 23:37:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.070 23:37:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:11:38.070 23:37:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.070 23:37:26 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:11:38.070 23:37:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:38.070 23:37:26 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.070 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:38.327 23:37:27 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:38.327 [ 0]:0x2 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7250ac824bcf4d4582fc7da821d5bdd1 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7250ac824bcf4d4582fc7da821d5bdd1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.327 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.583 [ 0]:0x1 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61ea7cd78e1d4e779cde044f9d9b1a9a 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61ea7cd78e1d4e779cde044f9d9b1a9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:38.583 [ 1]:0x2 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7250ac824bcf4d4582fc7da821d5bdd1 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7250ac824bcf4d4582fc7da821d5bdd1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.583 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:11:38.841 23:37:27 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:38.841 [ 0]:0x2 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7250ac824bcf4d4582fc7da821d5bdd1 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7250ac824bcf4d4582fc7da821d5bdd1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:11:38.841 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:39.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.099 23:37:27 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:39.356 23:37:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:11:39.356 23:37:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a27164d0-5ddb-44f9-847f-421d7e85f956 -a 192.168.100.8 -s 4420 -i 4 00:11:39.614 23:37:28 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:39.614 23:37:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1192 -- # local i=0 00:11:39.614 23:37:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:11:39.614 23:37:28 
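Note on the trace above: the visibility probe exercised repeatedly here (ns_is_visible) reduces to two nvme-cli calls against the connected controller, nvme list-ns and nvme id-ns, with jq pulling the NGUID out of the identify data; an all-zero NGUID means the namespace is masked for this host. A minimal stand-alone sketch of that check, reconstructed from the commands visible in the trace (the exact helper in ns_masking.sh may differ in detail):

# Sketch of the namespace-visibility probe seen in the trace above.
# Assumes nvme-cli and jq are installed and /dev/nvme0 is the connected controller.
ns_is_visible() {
    local ctrl=$1 nsid=$2 nguid

    # The NSID should appear in the controller's active namespace list ...
    nvme list-ns "$ctrl" | grep "$nsid"

    # ... and its identify data should carry a non-zero NGUID.
    nguid=$(nvme id-ns "$ctrl" -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# In this part of the run, namespace 1 is expected to be masked, namespace 2 visible:
ns_is_visible /dev/nvme0 0x1 || echo "nsid 1 is hidden from this host"
ns_is_visible /dev/nvme0 0x2 && echo "nsid 2 is visible to this host"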
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:11:39.614 23:37:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:11:39.614 23:37:28 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # sleep 2 00:11:41.511 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:11:41.511 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:11:41.511 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.511 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:11:41.511 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.511 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # return 0 00:11:41.511 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:41.511 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:41.769 [ 0]:0x1 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=61ea7cd78e1d4e779cde044f9d9b1a9a 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 61ea7cd78e1d4e779cde044f9d9b1a9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:41.769 [ 1]:0x2 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7250ac824bcf4d4582fc7da821d5bdd1 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7250ac824bcf4d4582fc7da821d5bdd1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:41.769 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- 
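The waitforserial gate that precedes these checks simply polls lsblk until the expected number of block devices carrying the SPDK serial has appeared. The loop below is a hedged reconstruction from the trace (iteration cap and sleep interval taken from the autotest_common.sh lines shown above):

# Sketch of the waitforserial polling loop from the trace above.
waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0

    while (( i++ <= 15 )); do
        sleep 2
        # Count the block devices whose SERIAL column matches.
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == expected )) && return 0
    done

    echo "timed out waiting for $expected device(s) with serial $serial" >&2
    return 1
}

# After 'nvme connect ... -i 4' with two visible namespaces:
waitforserial SPDKISFASTANDAWESOME 2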
common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:11:42.027 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.028 [ 0]:0x2 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7250ac824bcf4d4582fc7da821d5bdd1 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7250ac824bcf4d4582fc7da821d5bdd1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:42.028 23:37:30 
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:11:42.028 23:37:30 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:42.286 [2024-07-15 23:37:31.050012] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:42.286 request: 00:11:42.286 { 00:11:42.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.286 "nsid": 2, 00:11:42.286 "host": "nqn.2016-06.io.spdk:host1", 00:11:42.286 "method": "nvmf_ns_remove_host", 00:11:42.286 "req_id": 1 00:11:42.286 } 00:11:42.286 Got JSON-RPC error response 00:11:42.286 response: 00:11:42.286 { 00:11:42.286 "code": -32602, 00:11:42.286 "message": "Invalid parameters" 00:11:42.286 } 00:11:42.286 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:42.286 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:42.286 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:42.286 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:42.286 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@642 -- # local es=0 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@644 -- # valid_exec_arg ns_is_visible 0x1 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@630 -- # local arg=ns_is_visible 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # type -t ns_is_visible 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # ns_is_visible 0x1 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@645 -- # es=1 00:11:42.287 23:37:31 
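The masking operations themselves are plain JSON-RPC calls: nvmf_ns_add_host grants one host NQN access to one namespace of the subsystem and nvmf_ns_remove_host revokes it, and the -32602 response captured above is the failure the NOT wrapper expects when removing a host from namespace 2, which was never restricted in this run. A short sketch of that sequence using the same rpc.py helper:

# Sketch of the host-masking RPC calls exercised in this test.
RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2016-06.io.spdk:host1

# Grant HOSTNQN access to namespace 1, then revoke it again.
$RPC_PY nvmf_ns_add_host    "$SUBNQN" 1 "$HOSTNQN"
$RPC_PY nvmf_ns_remove_host "$SUBNQN" 1 "$HOSTNQN"

# Namespace 2 carries no host mask in this run, so removing a host from it
# is expected to fail with "Invalid parameters" (-32602), as in the log above.
if ! $RPC_PY nvmf_ns_remove_host "$SUBNQN" 2 "$HOSTNQN"; then
    echo "remove_host on an unmasked namespace failed as expected"
fi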
nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:42.287 [ 0]:0x2 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7250ac824bcf4d4582fc7da821d5bdd1 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7250ac824bcf4d4582fc7da821d5bdd1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:42.287 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1388472 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1388472 /var/tmp/host.sock 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@823 -- # '[' -z 1388472 ']' 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:42.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:42.545 23:37:31 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:42.822 [2024-07-15 23:37:31.550039] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:11:42.822 [2024-07-15 23:37:31.550083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388472 ] 00:11:42.822 [2024-07-15 23:37:31.603487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.822 [2024-07-15 23:37:31.679510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.389 23:37:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:43.389 23:37:32 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@856 -- # return 0 00:11:43.389 23:37:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.647 23:37:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:43.905 23:37:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 0fbe8723-8049-432f-978d-f85b13fa37ca 00:11:43.905 23:37:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:43.905 23:37:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 0FBE87238049432F978DF85B13FA37CA -i 00:11:43.905 23:37:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid dc7945eb-979f-4d5c-9975-83ab5e314c36 00:11:43.905 23:37:32 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:43.905 23:37:32 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g DC7945EB979F4D5C997583AB5E314C36 -i 00:11:44.163 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:44.421 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:44.421 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:44.421 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:44.678 nvme0n1 00:11:44.678 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:44.678 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:44.934 nvme1n2 00:11:44.934 23:37:33 
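With the host-side spdk_tgt up, the namespaces are recreated with fixed NGUIDs derived from UUIDs. Judging by the tr -d - call and the upper-case value passed to the RPC, uuid2nguid simply strips the dashes and upper-cases the UUID; the sketch below reproduces that conversion and the add_ns/add_host calls (any extra flags the real script passes are omitted here):

# Sketch: derive the 32-hex-digit NGUID used above from a UUID.
# Assumption: the helper only strips dashes and upper-cases the result.
uuid2nguid() {
    local uuid=$1
    tr -d '-' <<< "${uuid^^}"
}

RPC_PY=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
SUBNQN=nqn.2016-06.io.spdk:cnode1

# Re-add the two namespaces with explicit NGUIDs matching the log ...
$RPC_PY nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 1 \
    -g "$(uuid2nguid 0fbe8723-8049-432f-978d-f85b13fa37ca)"
$RPC_PY nvmf_subsystem_add_ns "$SUBNQN" Malloc2 -n 2 \
    -g "$(uuid2nguid dc7945eb-979f-4d5c-9975-83ab5e314c36)"

# ... and expose each namespace to exactly one host NQN.
$RPC_PY nvmf_ns_add_host "$SUBNQN" 1 nqn.2016-06.io.spdk:host1
$RPC_PY nvmf_ns_add_host "$SUBNQN" 2 nqn.2016-06.io.spdk:host2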
nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:44.934 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:44.934 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:44.934 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:44.934 23:37:33 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:45.192 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:45.192 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:45.192 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:45.192 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 0fbe8723-8049-432f-978d-f85b13fa37ca == \0\f\b\e\8\7\2\3\-\8\0\4\9\-\4\3\2\f\-\9\7\8\d\-\f\8\5\b\1\3\f\a\3\7\c\a ]] 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ dc7945eb-979f-4d5c-9975-83ab5e314c36 == \d\c\7\9\4\5\e\b\-\9\7\9\f\-\4\d\5\c\-\9\9\7\5\-\8\3\a\b\5\e\3\1\4\c\3\6 ]] 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1388472 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 1388472 ']' 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 1388472 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1388472 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1388472' 00:11:45.450 killing process with pid 1388472 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 1388472 00:11:45.450 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 1388472 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@488 -- 
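The check that closes the test talks to the second SPDK instance over /var/tmp/host.sock: one bdev_nvme controller is attached per host NQN, and bdev_get_bdevs is then used to confirm that each host only sees the namespace whose UUID was assigned to it. A condensed sketch using only the RPCs visible in the trace:

# Sketch: attach one controller per host NQN on the host-side spdk_tgt
# and read back which namespace (UUID) each host can see.
HOST_RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN=nqn.2016-06.io.spdk:cnode1
ADDR=192.168.100.8

$HOST_RPC bdev_nvme_attach_controller -t rdma -a "$ADDR" -f ipv4 -s 4420 \
    -n "$SUBNQN" -q nqn.2016-06.io.spdk:host1 -b nvme0
$HOST_RPC bdev_nvme_attach_controller -t rdma -a "$ADDR" -f ipv4 -s 4420 \
    -n "$SUBNQN" -q nqn.2016-06.io.spdk:host2 -b nvme1

# host1 should end up with nvme0n1 only, host2 with nvme1n2 only.
$HOST_RPC bdev_get_bdevs | jq -r '.[].name' | sort | xargs
$HOST_RPC bdev_get_bdevs -b nvme0n1 | jq -r '.[].uuid'   # expect 0fbe8723-...
$HOST_RPC bdev_get_bdevs -b nvme1n2 | jq -r '.[].uuid'   # expect dc7945eb-...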
# nvmfcleanup 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:46.017 rmmod nvme_rdma 00:11:46.017 rmmod nvme_fabrics 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1386233 ']' 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1386233 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@942 -- # '[' -z 1386233 ']' 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@946 -- # kill -0 1386233 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@947 -- # uname 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1386233 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1386233' 00:11:46.017 killing process with pid 1386233 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@961 -- # kill 1386233 00:11:46.017 23:37:34 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@966 -- # wait 1386233 00:11:46.275 23:37:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.275 23:37:35 nvmf_rdma.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:46.275 00:11:46.275 real 0m21.315s 00:11:46.275 user 0m25.162s 00:11:46.275 sys 0m5.809s 00:11:46.275 23:37:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:46.275 23:37:35 nvmf_rdma.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:46.275 ************************************ 00:11:46.275 END TEST nvmf_ns_masking 00:11:46.275 ************************************ 00:11:46.535 23:37:35 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:11:46.535 23:37:35 nvmf_rdma -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:11:46.535 23:37:35 nvmf_rdma -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:46.535 23:37:35 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:46.535 23:37:35 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:46.535 23:37:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:46.535 ************************************ 00:11:46.535 START TEST nvmf_nvme_cli 00:11:46.535 ************************************ 00:11:46.535 23:37:35 
nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:11:46.535 * Looking for test storage... 00:11:46.535 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:46.535 23:37:35 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.801 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:51.802 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:51.802 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:51.802 Found net devices under 0000:da:00.0: mlx_0_0 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:51.802 23:37:40 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:51.802 Found net devices under 0000:da:00.1: mlx_0_1 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
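allocate_nic_ips, which runs next, walks the RDMA-capable interfaces and records the IPv4 address configured on each one; as the trace just below shows, that yields 192.168.100.8 and 192.168.100.9 on this node. A stripped-down sketch of that address collection, assuming the mlx_0_0/mlx_0_1 interface names seen in this run:

# Sketch: collect the IPv4 address of each RDMA-capable interface,
# mirroring the ip/awk/cut pipeline in the trace below.
ips=()
for ifc in mlx_0_0 mlx_0_1; do
    ips+=("$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)")
done

NVMF_FIRST_TARGET_IP=${ips[0]}    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=${ips[1]}   # 192.168.100.9 in this run
echo "RDMA target IPs: $NVMF_FIRST_TARGET_IP $NVMF_SECOND_TARGET_IP"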
00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:51.802 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.802 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:11:51.802 altname enp218s0f0np0 00:11:51.802 altname ens818f0np0 00:11:51.802 inet 192.168.100.8/24 scope global mlx_0_0 00:11:51.802 valid_lft forever preferred_lft forever 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:51.802 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:51.802 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:11:51.802 altname enp218s0f1np1 00:11:51.802 altname ens818f1np1 00:11:51.802 inet 192.168.100.9/24 scope global mlx_0_1 00:11:51.802 valid_lft forever preferred_lft forever 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:51.802 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:51.803 192.168.100.9' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:51.803 192.168.100.9' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:51.803 192.168.100.9' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1392029 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1392029 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@823 -- # '[' -z 1392029 ']' 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@828 -- # local max_retries=100 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # xtrace_disable 00:11:51.803 23:37:40 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:51.803 [2024-07-15 23:37:40.330650] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:11:51.803 [2024-07-15 23:37:40.330700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.803 [2024-07-15 23:37:40.385466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.803 [2024-07-15 23:37:40.469299] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.803 [2024-07-15 23:37:40.469333] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.803 [2024-07-15 23:37:40.469340] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.803 [2024-07-15 23:37:40.469346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.803 [2024-07-15 23:37:40.469351] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
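nvmfappstart, seen above, boils down to launching nvmf_tgt with the requested core mask and waiting until its RPC socket answers before any rpc_cmd calls are issued. A simplified sketch under that assumption (the real waitforlisten helper does more bookkeeping; here the wait is reduced to polling for the UNIX socket):

# Simplified sketch of starting the NVMe-oF target and waiting for its RPC socket.
# Assumption: polling for /var/tmp/spdk.sock is enough for illustration.
NVMF_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt

"$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done
echo "nvmf_tgt running as pid $nvmfpid"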
00:11:51.803 [2024-07-15 23:37:40.469393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.803 [2024-07-15 23:37:40.469480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.803 [2024-07-15 23:37:40.469568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.803 [2024-07-15 23:37:40.469570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # return 0 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.370 [2024-07-15 23:37:41.214329] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17efcc0/0x17f41b0) succeed. 00:11:52.370 [2024-07-15 23:37:41.223527] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17f1300/0x1835840) succeed. 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:52.370 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.629 Malloc0 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.629 Malloc1 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:52.629 23:37:41 
nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.629 [2024-07-15 23:37:41.421392] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:11:52.629 00:11:52.629 Discovery Log Number of Records 2, Generation counter 2 00:11:52.629 =====Discovery Log Entry 0====== 00:11:52.629 trtype: rdma 00:11:52.629 adrfam: ipv4 00:11:52.629 subtype: current discovery subsystem 00:11:52.629 treq: not required 00:11:52.629 portid: 0 00:11:52.629 trsvcid: 4420 00:11:52.629 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:52.629 traddr: 192.168.100.8 00:11:52.629 eflags: explicit discovery connections, duplicate discovery information 00:11:52.629 rdma_prtype: not specified 00:11:52.629 rdma_qptype: connected 00:11:52.629 rdma_cms: rdma-cm 00:11:52.629 rdma_pkey: 0x0000 00:11:52.629 =====Discovery Log Entry 1====== 00:11:52.629 trtype: rdma 00:11:52.629 adrfam: ipv4 00:11:52.629 subtype: nvme subsystem 00:11:52.629 treq: not required 00:11:52.629 portid: 0 00:11:52.629 trsvcid: 4420 00:11:52.629 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:52.629 traddr: 192.168.100.8 00:11:52.629 eflags: none 00:11:52.629 rdma_prtype: not specified 00:11:52.629 rdma_qptype: connected 00:11:52.629 rdma_cms: rdma-cm 00:11:52.629 rdma_pkey: 0x0000 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:52.629 23:37:41 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:52.629 23:37:41 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:53.564 23:37:42 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:53.564 23:37:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1192 -- # local i=0 00:11:53.564 23:37:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:11:53.564 23:37:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # [[ -n 2 ]] 00:11:53.564 23:37:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # nvme_device_counter=2 00:11:53.564 23:37:42 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # sleep 2 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_devices=2 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # return 0 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:56.091 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:11:56.092 /dev/nvme0n1 ]] 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.092 23:37:44 
nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:11:56.092 23:37:44 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1213 -- # local i=0 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # return 0 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@553 -- # xtrace_disable 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:56.657 rmmod nvme_rdma 00:11:56.657 rmmod nvme_fabrics 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v 
-r nvme-fabrics 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1392029 ']' 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1392029 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@942 -- # '[' -z 1392029 ']' 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # kill -0 1392029 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # uname 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:11:56.657 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1392029 00:11:56.916 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:11:56.916 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:11:56.916 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1392029' 00:11:56.916 killing process with pid 1392029 00:11:56.916 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@961 -- # kill 1392029 00:11:56.916 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # wait 1392029 00:11:57.175 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:57.175 23:37:45 nvmf_rdma.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:57.175 00:11:57.175 real 0m10.657s 00:11:57.175 user 0m23.051s 00:11:57.175 sys 0m4.267s 00:11:57.175 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@1118 -- # xtrace_disable 00:11:57.175 23:37:45 nvmf_rdma.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.175 ************************************ 00:11:57.175 END TEST nvmf_nvme_cli 00:11:57.175 ************************************ 00:11:57.175 23:37:45 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:11:57.175 23:37:45 nvmf_rdma -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:11:57.175 23:37:45 nvmf_rdma -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:57.175 23:37:45 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:11:57.175 23:37:45 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:11:57.175 23:37:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:57.175 ************************************ 00:11:57.175 START TEST nvmf_host_management 00:11:57.175 ************************************ 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:11:57.175 * Looking for test storage... 
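Before the nvmf_host_management run that begins here, the nvme_cli test above tore its setup down in reverse order: disconnect the host, delete the subsystem over RPC, kill the target, then unload the host modules. A condensed sketch of that teardown; $nvmfpid stands for whatever PID the target was started with and is not taken verbatim from the harness:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # drop the host-side controller(s)
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                   # stop nvmf_tgt
modprobe -v -r nvme-rdma                             # unload host modules last
modprobe -v -r nvme-fabrics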
00:11:57.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.175 23:37:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.176 23:37:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.434 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:57.435 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:57.435 23:37:46 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.435 23:37:46 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.723 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:02.724 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:02.724 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:02.724 Found net devices under 0000:da:00.0: mlx_0_0 00:12:02.724 
23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:02.724 Found net devices under 0000:da:00.1: mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:02.724 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:02.724 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:02.724 altname enp218s0f0np0 00:12:02.724 altname ens818f0np0 00:12:02.724 inet 192.168.100.8/24 scope global mlx_0_0 00:12:02.724 valid_lft forever preferred_lft forever 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:02.724 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:02.724 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:02.724 altname enp218s0f1np1 00:12:02.724 altname ens818f1np1 00:12:02.724 inet 192.168.100.9/24 scope global mlx_0_1 00:12:02.724 valid_lft forever preferred_lft forever 
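The get_ip_address calls above reduce the one-line `ip -o -4 addr show <ifc>` output to a bare IPv4 address. Reconstructed from the exact commands shown in the xtrace, the helper is effectively:

get_ip_address() {
    local ifc=$1
    # Field 4 of the one-line output is "ADDR/PREFIX"; strip the prefix length.
    ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
get_ip_address mlx_0_1   # -> 192.168.100.9 on this node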
00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@113 -- # awk '{print $4}' 00:12:02.724 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:02.725 192.168.100.9' 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:02.725 192.168.100.9' 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:02.725 192.168.100.9' 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1396052 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1396052 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 1396052 ']' 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:02.725 23:37:51 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:02.725 [2024-07-15 23:37:51.472099] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
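Earlier in this block nvmftestinit collapses the per-interface addresses into RDMA_IP_LIST and then peels off the first and second target IPs with head/tail, exactly as the xtrace shows. As a self-contained illustration, with this run's two addresses filled in as the list:

RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)                 # one address per line
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'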
00:12:02.725 [2024-07-15 23:37:51.472143] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.725 [2024-07-15 23:37:51.528301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:02.725 [2024-07-15 23:37:51.603171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.725 [2024-07-15 23:37:51.603210] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.725 [2024-07-15 23:37:51.603217] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:02.725 [2024-07-15 23:37:51.603223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:02.725 [2024-07-15 23:37:51.603227] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:02.725 [2024-07-15 23:37:51.603336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.725 [2024-07-15 23:37:51.603403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:02.725 [2024-07-15 23:37:51.603492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:02.725 [2024-07-15 23:37:51.603493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.662 [2024-07-15 23:37:52.347262] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf56e10/0xf5b300) succeed. 00:12:03.662 [2024-07-15 23:37:52.356456] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf58400/0xf9c990) succeed. 
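The reactor notices above follow directly from the core mask: -m 0xF earlier gave reactors on cores 0-3, while -m 0x1E here gives cores 1-4. A quick way to decode any mask, purely illustrative and not part of the test scripts:

mask=0x1E
for bit in $(seq 0 31); do
    (( (mask >> bit) & 1 )) && echo "reactor on core $bit"
done
# prints cores 1, 2, 3 and 4 for 0x1E; the same loop prints cores 0-3 for 0xF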
00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.662 Malloc0 00:12:03.662 [2024-07-15 23:37:52.529974] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1396325 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1396325 /var/tmp/bdevperf.sock 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@823 -- # '[' -z 1396325 ']' 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:03.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
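The create_subsystem step above batches its RPCs through rpcs.txt, whose contents are not echoed in the log. Judging by the Malloc0 bdev, the listener notice on 192.168.100.8:4420, and the cnode0/host0 NQNs bdevperf uses next, an equivalent explicit sequence would look roughly like the following; the serial number is an assumption, and the add_host line is inferred from the remove_host/add_host toggling further down:

./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420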
00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:03.662 { 00:12:03.662 "params": { 00:12:03.662 "name": "Nvme$subsystem", 00:12:03.662 "trtype": "$TEST_TRANSPORT", 00:12:03.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.662 "adrfam": "ipv4", 00:12:03.662 "trsvcid": "$NVMF_PORT", 00:12:03.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.662 "hdgst": ${hdgst:-false}, 00:12:03.662 "ddgst": ${ddgst:-false} 00:12:03.662 }, 00:12:03.662 "method": "bdev_nvme_attach_controller" 00:12:03.662 } 00:12:03.662 EOF 00:12:03.662 )") 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:03.662 23:37:52 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:03.662 "params": { 00:12:03.662 "name": "Nvme0", 00:12:03.662 "trtype": "rdma", 00:12:03.662 "traddr": "192.168.100.8", 00:12:03.662 "adrfam": "ipv4", 00:12:03.662 "trsvcid": "4420", 00:12:03.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:03.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:03.662 "hdgst": false, 00:12:03.662 "ddgst": false 00:12:03.662 }, 00:12:03.662 "method": "bdev_nvme_attach_controller" 00:12:03.662 }' 00:12:03.662 [2024-07-15 23:37:52.621699] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:12:03.662 [2024-07-15 23:37:52.621742] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396325 ] 00:12:03.921 [2024-07-15 23:37:52.678350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.921 [2024-07-15 23:37:52.752573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.179 Running I/O for 10 seconds... 
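gen_nvmf_target_json above emits the bdev_nvme_attach_controller entry that bdevperf reads over /dev/fd/63. Written out to a regular file, the config would take SPDK's usual subsystems wrapper; the wrapper shape and the temporary path below are assumptions, while the params block and the bdevperf flags (-q 64, -o 65536, -w verify, -t 10) come straight from the log:

cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10

The harness then polls the same bdevperf RPC socket (bdev_get_iostat on the attached namespace) until it sees read I/O accumulate, which is the waitforio loop visible in the next block.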
00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@856 -- # return 0 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:04.529 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1605 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1605 -ge 100 ']' 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@60 -- # break 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.795 [2024-07-15 23:37:53.518697] rdma.c: 864:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 8 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:12:04.795 23:37:53 
nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@553 -- # xtrace_disable 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:12:04.795 23:37:53 nvmf_rdma.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:12:05.729 [2024-07-15 23:37:54.530373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:92672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:92800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:92928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:93312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530533] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:93568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:12:05.729 [2024-07-15 23:37:54.530558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:12:05.729 [2024-07-15 23:37:54.530574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.729 [2024-07-15 23:37:54.530582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:12:05.729 [2024-07-15 23:37:54.530588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530675] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:94592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:94848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:95104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182700 00:12:05.730 [2024-07-15 23:37:54.530792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530816] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018edff00 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:96128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eafd80 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e9fd00 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:96384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e7fc00 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:96768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e5fb00 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 
nsid:1 lba:96896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e3fa00 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:97152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e2f980 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.530988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.530994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182600 00:12:05.730 [2024-07-15 23:37:54.531008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182100 00:12:05.730 [2024-07-15 23:37:54.531022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a3afc0 len:0x10000 key:0x182100 00:12:05.730 [2024-07-15 23:37:54.531037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182100 00:12:05.730 [2024-07-15 23:37:54.531051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182100 00:12:05.730 [2024-07-15 23:37:54.531066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:98048 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x200003a0ae40 len:0x10000 key:0x182100 00:12:05.730 [2024-07-15 23:37:54.531080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182500 00:12:05.730 [2024-07-15 23:37:54.531094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:98304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:12:05.730 [2024-07-15 23:37:54.531108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:98432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:12:05.730 [2024-07-15 23:37:54.531122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d68c000 len:0x10000 key:0x182400 00:12:05.730 [2024-07-15 23:37:54.531137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.730 [2024-07-15 23:37:54.531145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ad000 len:0x10000 key:0x182400 00:12:05.730 [2024-07-15 23:37:54.531151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ce000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:90752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ef000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:90880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2f0000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:91008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d311000 
len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:91136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d05c000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d07d000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd40000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:91520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e13f000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:91648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e11e000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0fd000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:91904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0dc000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:92032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0bb000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:92160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09a000 len:0x10000 key:0x182400 00:12:05.731 
[2024-07-15 23:37:54.531337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.531345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e079000 len:0x10000 key:0x182400 00:12:05.731 [2024-07-15 23:37:54.531353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1fdc2000 sqhd:52b0 p:0 m:0 dnr:0 00:12:05.731 [2024-07-15 23:37:54.533361] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:12:05.731 [2024-07-15 23:37:54.534260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:12:05.731 task offset: 92416 on job bdev=Nvme0n1 fails 00:12:05.731 00:12:05.731 Latency(us) 00:12:05.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:05.731 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:05.731 Job: Nvme0n1 ended in about 1.60 seconds with error 00:12:05.731 Verification LBA range: start 0x0 length 0x400 00:12:05.731 Nvme0n1 : 1.60 1081.96 67.62 40.03 0.00 56523.39 1708.62 1018616.69 00:12:05.731 =================================================================================================================== 00:12:05.731 Total : 1081.96 67.62 40.03 0.00 56523.39 1708.62 1018616.69 00:12:05.731 [2024-07-15 23:37:54.535922] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1396325 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:05.731 { 00:12:05.731 "params": { 00:12:05.731 "name": "Nvme$subsystem", 00:12:05.731 "trtype": "$TEST_TRANSPORT", 00:12:05.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:05.731 "adrfam": "ipv4", 00:12:05.731 "trsvcid": "$NVMF_PORT", 00:12:05.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:05.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:05.731 "hdgst": ${hdgst:-false}, 00:12:05.731 "ddgst": ${ddgst:-false} 00:12:05.731 }, 00:12:05.731 "method": "bdev_nvme_attach_controller" 00:12:05.731 } 00:12:05.731 EOF 00:12:05.731 )") 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
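The shell trace above is host_management.sh re-launching bdevperf with a target JSON generated on the fly; the printf output just below shows the single bdev_nvme_attach_controller entry that config contains. As a rough standalone sketch of the same step (the outer "subsystems"/"bdev" wrapper and the temporary file name are assumptions added here, while the attach parameters and the bdevperf flags are the ones traced in this log):

    # Sketch only: hand-run the equivalent of this bdevperf step.
    # The params block mirrors the JSON rendered below; the wrapper and the
    # file name are assumed, not part of this log.
    cat > /tmp/bdevperf-nvme0.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0", "trtype": "rdma", "traddr": "192.168.100.8",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false, "ddgst": false
        } } ] } ] }
    EOF
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf-nvme0.json -q 64 -o 65536 -w verify -t 1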
00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:12:05.731 23:37:54 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:05.731 "params": { 00:12:05.731 "name": "Nvme0", 00:12:05.731 "trtype": "rdma", 00:12:05.731 "traddr": "192.168.100.8", 00:12:05.731 "adrfam": "ipv4", 00:12:05.731 "trsvcid": "4420", 00:12:05.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:05.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:12:05.731 "hdgst": false, 00:12:05.731 "ddgst": false 00:12:05.731 }, 00:12:05.731 "method": "bdev_nvme_attach_controller" 00:12:05.731 }' 00:12:05.731 [2024-07-15 23:37:54.584448] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:12:05.731 [2024-07-15 23:37:54.584491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396577 ] 00:12:05.731 [2024-07-15 23:37:54.639875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.990 [2024-07-15 23:37:54.713971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.990 Running I/O for 1 seconds... 00:12:06.925 00:12:06.925 Latency(us) 00:12:06.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.925 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:12:06.925 Verification LBA range: start 0x0 length 0x400 00:12:06.925 Nvme0n1 : 1.01 3024.72 189.04 0.00 0.00 20724.22 655.36 42941.68 00:12:06.925 =================================================================================================================== 00:12:06.925 Total : 3024.72 189.04 0.00 0.00 20724.22 655.36 42941.68 00:12:07.184 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1396325 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:07.184 rmmod nvme_rdma 00:12:07.184 rmmod nvme_fabrics 00:12:07.184 23:37:56 nvmf_rdma.nvmf_host_management -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1396052 ']' 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1396052 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@942 -- # '[' -z 1396052 ']' 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@946 -- # kill -0 1396052 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@947 -- # uname 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1396052 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1396052' 00:12:07.443 killing process with pid 1396052 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@961 -- # kill 1396052 00:12:07.443 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@966 -- # wait 1396052 00:12:07.702 [2024-07-15 23:37:56.470362] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:12:07.702 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.702 23:37:56 nvmf_rdma.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:07.702 23:37:56 nvmf_rdma.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:12:07.702 00:12:07.702 real 0m10.457s 00:12:07.702 user 0m24.503s 00:12:07.702 sys 0m4.746s 00:12:07.702 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:07.702 23:37:56 nvmf_rdma.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:12:07.702 ************************************ 00:12:07.702 END TEST nvmf_host_management 00:12:07.702 ************************************ 00:12:07.702 23:37:56 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:12:07.702 23:37:56 nvmf_rdma -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:07.702 23:37:56 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:07.702 23:37:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:07.702 23:37:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:07.702 ************************************ 00:12:07.702 START TEST nvmf_lvol 00:12:07.702 ************************************ 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:12:07.702 * Looking for test storage... 
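For reference before the lvol suite gets going: the nvmftestfini teardown traced above amounts to flushing I/O, unloading the kernel initiator modules and stopping the nvmf_tgt that was started for the test. A minimal sketch, assuming $nvmfpid stands in for the target PID the common scripts track (1396052 in this run):

    # Sketch of the teardown shown above; run as root on the test node.
    sync
    modprobe -v -r nvme-rdma        # emits the rmmod nvme_rdma / nvme_fabrics lines seen in the log
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"
    wait "$nvmfpid"                 # works because nvmf_tgt was started by this same shell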
00:12:07.702 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt 
]] 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:12:07.702 23:37:56 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.973 23:38:01 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:12.973 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:12.973 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:12.973 Found net devices under 0000:da:00.0: mlx_0_0 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:12.973 Found net devices under 0000:da:00.1: mlx_0_1 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:12.973 23:38:01 
nvmf_rdma.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:12.973 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- 
nvmf/common.sh@113 -- # cut -d/ -f1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:12.974 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:12.974 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:12.974 altname enp218s0f0np0 00:12:12.974 altname ens818f0np0 00:12:12.974 inet 192.168.100.8/24 scope global mlx_0_0 00:12:12.974 valid_lft forever preferred_lft forever 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:12.974 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:12.974 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:12.974 altname enp218s0f1np1 00:12:12.974 altname ens818f1np1 00:12:12.974 inet 192.168.100.9/24 scope global mlx_0_1 00:12:12.974 valid_lft forever preferred_lft forever 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ 
mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:12.974 192.168.100.9' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:12.974 192.168.100.9' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:12.974 192.168.100.9' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1399876 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1399876 00:12:12.974 23:38:01 nvmf_rdma.nvmf_lvol -- 
common/autotest_common.sh@823 -- # '[' -z 1399876 ']' 00:12:12.975 23:38:01 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.975 23:38:01 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:12.975 23:38:01 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:12.975 23:38:01 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:12.975 23:38:01 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:12.975 [2024-07-15 23:38:01.676146] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:12:12.975 [2024-07-15 23:38:01.676191] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.975 [2024-07-15 23:38:01.732991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:12.975 [2024-07-15 23:38:01.808884] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.975 [2024-07-15 23:38:01.808926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.975 [2024-07-15 23:38:01.808933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.975 [2024-07-15 23:38:01.808938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.975 [2024-07-15 23:38:01.808943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.975 [2024-07-15 23:38:01.809003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:12.975 [2024-07-15 23:38:01.809101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:12.975 [2024-07-15 23:38:01.809103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.542 23:38:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:13.542 23:38:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@856 -- # return 0 00:12:13.542 23:38:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:13.542 23:38:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.542 23:38:02 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:13.542 23:38:02 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.542 23:38:02 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:13.800 [2024-07-15 23:38:02.677536] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x956f00/0x95b3f0) succeed. 00:12:13.800 [2024-07-15 23:38:02.686454] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9584a0/0x99ca80) succeed. 
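The RPC sequence traced below provisions the whole stack for this suite: an RDMA transport, two malloc bdevs striped into raid0, an lvstore plus an lvol on top of it, and an NVMe-oF subsystem exporting that lvol on 192.168.100.8:4420. Condensed into a standalone sketch, using the same commands, names and sizes as this run; the $rpc, $lvs and $lvol shell variables are added here for readability, and the UUIDs will of course differ on another run:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512                        # Malloc0 (MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE)
    $rpc bdev_malloc_create 64 512                        # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 == LVOL_BDEV_INIT_SIZE
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

Once the listener is up, spdk_nvme_perf is pointed at the same address (randwrite, 4 KiB, queue depth 128, 10 seconds), and the bdev_lvol_snapshot / bdev_lvol_resize / bdev_lvol_clone / bdev_lvol_inflate calls further down exercise the lvol while that I/O is in flight.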
00:12:14.059 23:38:02 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:14.059 23:38:02 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:14.059 23:38:02 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:14.317 23:38:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:14.317 23:38:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:14.575 23:38:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:14.575 23:38:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9b362e46-28d2-471d-b575-9b2eda883033 00:12:14.575 23:38:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9b362e46-28d2-471d-b575-9b2eda883033 lvol 20 00:12:14.834 23:38:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=82fcbf68-8350-475a-9ccc-5ff1f94096d6 00:12:14.834 23:38:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:15.092 23:38:03 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82fcbf68-8350-475a-9ccc-5ff1f94096d6 00:12:15.092 23:38:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:15.350 [2024-07-15 23:38:04.214474] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:15.350 23:38:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:15.609 23:38:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1400403 00:12:15.609 23:38:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:15.609 23:38:04 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:16.542 23:38:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 82fcbf68-8350-475a-9ccc-5ff1f94096d6 MY_SNAPSHOT 00:12:16.798 23:38:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=eb53a96b-44d0-4860-9cf2-1963776f2995 00:12:16.798 23:38:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 82fcbf68-8350-475a-9ccc-5ff1f94096d6 30 00:12:17.054 23:38:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone eb53a96b-44d0-4860-9cf2-1963776f2995 MY_CLONE 00:12:17.054 23:38:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=06cc0799-7e73-4b8e-9293-7e120e351acf 00:12:17.054 
23:38:05 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 06cc0799-7e73-4b8e-9293-7e120e351acf 00:12:17.310 23:38:06 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1400403 00:12:27.315 Initializing NVMe Controllers 00:12:27.315 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:12:27.315 Controller IO queue size 128, less than required. 00:12:27.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:27.315 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:27.315 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:27.315 Initialization complete. Launching workers. 00:12:27.315 ======================================================== 00:12:27.315 Latency(us) 00:12:27.315 Device Information : IOPS MiB/s Average min max 00:12:27.315 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16946.30 66.20 7555.46 2028.78 45007.33 00:12:27.315 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16975.10 66.31 7542.25 3748.88 48414.08 00:12:27.315 ======================================================== 00:12:27.315 Total : 33921.40 132.51 7548.85 2028.78 48414.08 00:12:27.315 00:12:27.315 23:38:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:27.315 23:38:15 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 82fcbf68-8350-475a-9ccc-5ff1f94096d6 00:12:27.315 23:38:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b362e46-28d2-471d-b575-9b2eda883033 00:12:27.315 23:38:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:27.315 23:38:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:27.315 23:38:16 nvmf_rdma.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:27.316 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.316 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:12:27.316 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:27.316 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:27.316 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:12:27.316 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.316 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:27.316 rmmod nvme_rdma 00:12:27.316 rmmod nvme_fabrics 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1399876 ']' 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1399876 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@942 -- # '[' -z 1399876 ']' 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@946 -- # kill -0 1399876 00:12:27.575 
23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@947 -- # uname 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1399876 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1399876' 00:12:27.575 killing process with pid 1399876 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@961 -- # kill 1399876 00:12:27.575 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@966 -- # wait 1399876 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:27.834 00:12:27.834 real 0m20.085s 00:12:27.834 user 1m10.282s 00:12:27.834 sys 0m4.757s 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:27.834 ************************************ 00:12:27.834 END TEST nvmf_lvol 00:12:27.834 ************************************ 00:12:27.834 23:38:16 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:12:27.834 23:38:16 nvmf_rdma -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:27.834 23:38:16 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:27.834 23:38:16 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:27.834 23:38:16 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:12:27.834 ************************************ 00:12:27.834 START TEST nvmf_lvs_grow 00:12:27.834 ************************************ 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:12:27.834 * Looking for test storage... 
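For reference, the nvmf_lvol test body that produced the trace above reduces to roughly the RPC sequence below; UUIDs are captured from the rpc.py output as in this run, and the perf_pid handling is paraphrased.

    # Rough sketch of target/nvmf_lvol.sh as traced above; $rpc points at scripts/rpc.py.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Two 64 MiB malloc bdevs striped into a RAID0, with an lvstore and a 20 MiB lvol on top.
    $rpc bdev_malloc_create 64 512            # -> Malloc0
    $rpc bdev_malloc_create 64 512            # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

    # Export the lvol over NVMe-oF/RDMA and run spdk_nvme_perf against it in the background.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    perf_pid=$!

    # Snapshot/resize/clone/inflate the lvol while I/O is in flight, then tear everything down.
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait "$perf_pid"
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"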
00:12:27.834 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.834 23:38:16 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.835 23:38:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.093 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.093 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.093 23:38:16 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.093 23:38:16 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:33.366 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:33.366 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:33.366 Found net devices under 0000:da:00.0: mlx_0_0 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.366 23:38:21 
nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:33.366 Found net devices under 0000:da:00.1: mlx_0_1 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:33.366 23:38:21 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:33.366 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:33.366 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:33.367 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:12:33.367 altname enp218s0f0np0 00:12:33.367 altname ens818f0np0 00:12:33.367 inet 192.168.100.8/24 scope global mlx_0_0 00:12:33.367 valid_lft forever preferred_lft forever 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:33.367 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:33.367 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:12:33.367 altname enp218s0f1np1 00:12:33.367 altname ens818f1np1 00:12:33.367 inet 192.168.100.9/24 scope global mlx_0_1 00:12:33.367 valid_lft forever preferred_lft forever 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:33.367 192.168.100.9' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:33.367 192.168.100.9' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:33.367 192.168.100.9' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma 
--num-shared-buffers 1024' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1405495 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1405495 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # '[' -z 1405495 ']' 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:33.367 23:38:22 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:33.367 [2024-07-15 23:38:22.237589] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:12:33.367 [2024-07-15 23:38:22.237640] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.367 [2024-07-15 23:38:22.296883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:33.625 [2024-07-15 23:38:22.374763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.625 [2024-07-15 23:38:22.374799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.625 [2024-07-15 23:38:22.374805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.625 [2024-07-15 23:38:22.374811] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.625 [2024-07-15 23:38:22.374816] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
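The interface/IP discovery traced above (get_rdma_if_list / get_ip_address) is a small pipeline over ip(8); a sketch of the relevant pieces, using the addresses observed in this run:

    # Sketch of how the target IPs above are derived from the mlx_0_* interfaces.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(printf '%s\n%s\n' "$(get_ip_address mlx_0_0)" "$(get_ip_address mlx_0_1)")

    # First address becomes the listener/target IP, the second the secondary target IP.
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9 here
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma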
00:12:33.625 [2024-07-15 23:38:22.374833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.192 23:38:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:34.192 23:38:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # return 0 00:12:34.192 23:38:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:34.192 23:38:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:34.192 23:38:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:34.192 23:38:23 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.192 23:38:23 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:34.450 [2024-07-15 23:38:23.237218] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x84f910/0x853e00) succeed. 00:12:34.450 [2024-07-15 23:38:23.246099] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x850e10/0x895490) succeed. 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:34.450 ************************************ 00:12:34.450 START TEST lvs_grow_clean 00:12:34.450 ************************************ 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1117 -- # lvs_grow 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:34.450 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:34.707 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:34.707 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 
--md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:34.965 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:34.965 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:34.965 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:34.965 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:34.965 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:34.965 23:38:23 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 lvol 150 00:12:35.222 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=25783694-2061-47e7-ab1e-c745471d625b 00:12:35.222 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:35.222 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:35.222 [2024-07-15 23:38:24.187896] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:35.222 [2024-07-15 23:38:24.187946] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:35.222 true 00:12:35.222 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:35.478 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:35.478 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:35.478 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:35.736 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 25783694-2061-47e7-ab1e-c745471d625b 00:12:35.736 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:35.994 [2024-07-15 23:38:24.854106] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:35.994 23:38:24 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1405994 
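The lvs_grow_clean setup traced above builds the device stack below before handing it to bdevperf; a condensed sketch with the file path and sizes from this run (variable names are illustrative):

    # Sketch of the lvs_grow_clean setup: AIO file -> aio bdev -> lvstore -> lvol -> NVMe-oF namespace.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    aio_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev

    rm -f "$aio_file"
    truncate -s 200M "$aio_file"
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # 200 MiB of 4 MiB clusters minus metadata leaves the 49 data clusters checked in the trace.
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'

    # Create a 150 MiB lvol, then enlarge the backing file and rescan; the cluster count
    # stays at 49 until bdev_lvol_grow_lvstore is called later in the test.
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M "$aio_file"
    $rpc bdev_aio_rescan aio_bdev

    # Expose the lvol over NVMe-oF/RDMA so the bdevperf instance started next can attach to it.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420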
00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1405994 /var/tmp/bdevperf.sock 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@823 -- # '[' -z 1405994 ']' 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:36.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:36.253 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:36.253 [2024-07-15 23:38:25.062181] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:12:36.253 [2024-07-15 23:38:25.062226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1405994 ] 00:12:36.253 [2024-07-15 23:38:25.115560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.253 [2024-07-15 23:38:25.190095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.187 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:37.187 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # return 0 00:12:37.187 23:38:25 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:37.187 Nvme0n1 00:12:37.187 23:38:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:37.445 [ 00:12:37.445 { 00:12:37.445 "name": "Nvme0n1", 00:12:37.446 "aliases": [ 00:12:37.446 "25783694-2061-47e7-ab1e-c745471d625b" 00:12:37.446 ], 00:12:37.446 "product_name": "NVMe disk", 00:12:37.446 "block_size": 4096, 00:12:37.446 "num_blocks": 38912, 00:12:37.446 "uuid": "25783694-2061-47e7-ab1e-c745471d625b", 00:12:37.446 "assigned_rate_limits": { 00:12:37.446 "rw_ios_per_sec": 0, 00:12:37.446 "rw_mbytes_per_sec": 0, 00:12:37.446 "r_mbytes_per_sec": 0, 00:12:37.446 "w_mbytes_per_sec": 0 00:12:37.446 }, 00:12:37.446 "claimed": false, 00:12:37.446 "zoned": false, 00:12:37.446 "supported_io_types": { 00:12:37.446 "read": true, 00:12:37.446 "write": true, 00:12:37.446 "unmap": true, 00:12:37.446 "flush": 
true, 00:12:37.446 "reset": true, 00:12:37.446 "nvme_admin": true, 00:12:37.446 "nvme_io": true, 00:12:37.446 "nvme_io_md": false, 00:12:37.446 "write_zeroes": true, 00:12:37.446 "zcopy": false, 00:12:37.446 "get_zone_info": false, 00:12:37.446 "zone_management": false, 00:12:37.446 "zone_append": false, 00:12:37.446 "compare": true, 00:12:37.446 "compare_and_write": true, 00:12:37.446 "abort": true, 00:12:37.446 "seek_hole": false, 00:12:37.446 "seek_data": false, 00:12:37.446 "copy": true, 00:12:37.446 "nvme_iov_md": false 00:12:37.446 }, 00:12:37.446 "memory_domains": [ 00:12:37.446 { 00:12:37.446 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:37.446 "dma_device_type": 0 00:12:37.446 } 00:12:37.446 ], 00:12:37.446 "driver_specific": { 00:12:37.446 "nvme": [ 00:12:37.446 { 00:12:37.446 "trid": { 00:12:37.446 "trtype": "RDMA", 00:12:37.446 "adrfam": "IPv4", 00:12:37.446 "traddr": "192.168.100.8", 00:12:37.446 "trsvcid": "4420", 00:12:37.446 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:37.446 }, 00:12:37.446 "ctrlr_data": { 00:12:37.446 "cntlid": 1, 00:12:37.446 "vendor_id": "0x8086", 00:12:37.446 "model_number": "SPDK bdev Controller", 00:12:37.446 "serial_number": "SPDK0", 00:12:37.446 "firmware_revision": "24.09", 00:12:37.446 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:37.446 "oacs": { 00:12:37.446 "security": 0, 00:12:37.446 "format": 0, 00:12:37.446 "firmware": 0, 00:12:37.446 "ns_manage": 0 00:12:37.446 }, 00:12:37.446 "multi_ctrlr": true, 00:12:37.446 "ana_reporting": false 00:12:37.446 }, 00:12:37.446 "vs": { 00:12:37.446 "nvme_version": "1.3" 00:12:37.446 }, 00:12:37.446 "ns_data": { 00:12:37.446 "id": 1, 00:12:37.446 "can_share": true 00:12:37.446 } 00:12:37.446 } 00:12:37.446 ], 00:12:37.446 "mp_policy": "active_passive" 00:12:37.446 } 00:12:37.446 } 00:12:37.446 ] 00:12:37.446 23:38:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1406232 00:12:37.446 23:38:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:37.446 23:38:26 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:37.446 Running I/O for 10 seconds... 
00:12:38.381 Latency(us) 00:12:38.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:38.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.381 Nvme0n1 : 1.00 34501.00 134.77 0.00 0.00 0.00 0.00 0.00 00:12:38.381 =================================================================================================================== 00:12:38.381 Total : 34501.00 134.77 0.00 0.00 0.00 0.00 0.00 00:12:38.381 00:12:39.317 23:38:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:39.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.576 Nvme0n1 : 2.00 34915.00 136.39 0.00 0.00 0.00 0.00 0.00 00:12:39.576 =================================================================================================================== 00:12:39.576 Total : 34915.00 136.39 0.00 0.00 0.00 0.00 0.00 00:12:39.576 00:12:39.576 true 00:12:39.576 23:38:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:39.576 23:38:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:39.835 23:38:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:39.835 23:38:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:39.835 23:38:28 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1406232 00:12:40.403 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.403 Nvme0n1 : 3.00 35061.00 136.96 0.00 0.00 0.00 0.00 0.00 00:12:40.403 =================================================================================================================== 00:12:40.403 Total : 35061.00 136.96 0.00 0.00 0.00 0.00 0.00 00:12:40.403 00:12:41.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:41.781 Nvme0n1 : 4.00 35073.75 137.01 0.00 0.00 0.00 0.00 0.00 00:12:41.781 =================================================================================================================== 00:12:41.781 Total : 35073.75 137.01 0.00 0.00 0.00 0.00 0.00 00:12:41.781 00:12:42.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:42.717 Nvme0n1 : 5.00 35162.40 137.35 0.00 0.00 0.00 0.00 0.00 00:12:42.717 =================================================================================================================== 00:12:42.717 Total : 35162.40 137.35 0.00 0.00 0.00 0.00 0.00 00:12:42.717 00:12:43.655 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:43.655 Nvme0n1 : 6.00 35247.67 137.69 0.00 0.00 0.00 0.00 0.00 00:12:43.655 =================================================================================================================== 00:12:43.655 Total : 35247.67 137.69 0.00 0.00 0.00 0.00 0.00 00:12:43.655 00:12:44.590 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:44.590 Nvme0n1 : 7.00 35305.00 137.91 0.00 0.00 0.00 0.00 0.00 00:12:44.590 =================================================================================================================== 00:12:44.590 Total : 35305.00 137.91 0.00 0.00 
0.00 0.00 0.00 00:12:44.590 00:12:45.526 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:45.526 Nvme0n1 : 8.00 35347.88 138.08 0.00 0.00 0.00 0.00 0.00 00:12:45.526 =================================================================================================================== 00:12:45.526 Total : 35347.88 138.08 0.00 0.00 0.00 0.00 0.00 00:12:45.526 00:12:46.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:46.462 Nvme0n1 : 9.00 35382.11 138.21 0.00 0.00 0.00 0.00 0.00 00:12:46.462 =================================================================================================================== 00:12:46.462 Total : 35382.11 138.21 0.00 0.00 0.00 0.00 0.00 00:12:46.462 00:12:47.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.469 Nvme0n1 : 10.00 35407.50 138.31 0.00 0.00 0.00 0.00 0.00 00:12:47.469 =================================================================================================================== 00:12:47.469 Total : 35407.50 138.31 0.00 0.00 0.00 0.00 0.00 00:12:47.469 00:12:47.469 00:12:47.469 Latency(us) 00:12:47.469 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.469 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.469 Nvme0n1 : 10.00 35407.13 138.31 0.00 0.00 3612.06 2215.74 16227.96 00:12:47.469 =================================================================================================================== 00:12:47.469 Total : 35407.13 138.31 0.00 0.00 3612.06 2215.74 16227.96 00:12:47.469 0 00:12:47.469 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1405994 00:12:47.469 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@942 -- # '[' -z 1405994 ']' 00:12:47.469 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # kill -0 1405994 00:12:47.469 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # uname 00:12:47.469 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:12:47.469 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1405994 00:12:47.733 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:12:47.733 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:12:47.733 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1405994' 00:12:47.733 killing process with pid 1405994 00:12:47.733 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@961 -- # kill 1405994 00:12:47.733 Received shutdown signal, test time was about 10.000000 seconds 00:12:47.733 00:12:47.733 Latency(us) 00:12:47.733 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.733 =================================================================================================================== 00:12:47.733 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:47.733 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # wait 1405994 00:12:47.733 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:47.992 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:48.250 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:48.250 23:38:36 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:48.250 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:48.250 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:48.250 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:48.509 [2024-07-15 23:38:37.297373] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # local es=0 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:12:48.509 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:48.509 request: 00:12:48.509 { 00:12:48.509 "uuid": "b5e6cebf-a31b-43be-af90-2ab5e9d91ee2", 00:12:48.509 "method": "bdev_lvol_get_lvstores", 00:12:48.509 "req_id": 1 00:12:48.509 } 00:12:48.509 Got JSON-RPC error response 00:12:48.509 response: 00:12:48.509 { 00:12:48.509 "code": -19, 00:12:48.509 "message": "No such device" 00:12:48.509 } 
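The grow-and-teardown phase traced above amounts to the following checks; a sketch using the lvstore UUID created earlier in this run:

    # Sketch of the grow/verify steps traced above.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    lvs=b5e6cebf-a31b-43be-af90-2ab5e9d91ee2

    # Growing the lvstore picks up the clusters added when the AIO file went from 200M to 400M: 49 -> 99.
    $rpc bdev_lvol_grow_lvstore -u "$lvs"
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99

    # With the 150 MiB lvol consuming 38 clusters, 61 of the 99 should remain free after the run.
    $rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61

    # Deleting the AIO bdev takes the lvstore with it, so the same query must now fail ("No such device").
    $rpc bdev_aio_delete aio_bdev
    $rpc bdev_lvol_get_lvstores -u "$lvs" && echo 'unexpected success'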
00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@645 -- # es=1 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:48.767 aio_bdev 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 25783694-2061-47e7-ab1e-c745471d625b 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@891 -- # local bdev_name=25783694-2061-47e7-ab1e-c745471d625b 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@893 -- # local i 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:12:48.767 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:49.026 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 25783694-2061-47e7-ab1e-c745471d625b -t 2000 00:12:49.026 [ 00:12:49.026 { 00:12:49.026 "name": "25783694-2061-47e7-ab1e-c745471d625b", 00:12:49.026 "aliases": [ 00:12:49.026 "lvs/lvol" 00:12:49.026 ], 00:12:49.026 "product_name": "Logical Volume", 00:12:49.026 "block_size": 4096, 00:12:49.026 "num_blocks": 38912, 00:12:49.026 "uuid": "25783694-2061-47e7-ab1e-c745471d625b", 00:12:49.026 "assigned_rate_limits": { 00:12:49.026 "rw_ios_per_sec": 0, 00:12:49.026 "rw_mbytes_per_sec": 0, 00:12:49.026 "r_mbytes_per_sec": 0, 00:12:49.026 "w_mbytes_per_sec": 0 00:12:49.026 }, 00:12:49.026 "claimed": false, 00:12:49.026 "zoned": false, 00:12:49.026 "supported_io_types": { 00:12:49.026 "read": true, 00:12:49.026 "write": true, 00:12:49.026 "unmap": true, 00:12:49.026 "flush": false, 00:12:49.026 "reset": true, 00:12:49.026 "nvme_admin": false, 00:12:49.026 "nvme_io": false, 00:12:49.026 "nvme_io_md": false, 00:12:49.026 "write_zeroes": true, 00:12:49.026 "zcopy": false, 00:12:49.026 "get_zone_info": false, 00:12:49.026 "zone_management": false, 00:12:49.026 "zone_append": false, 00:12:49.026 "compare": false, 00:12:49.026 "compare_and_write": false, 00:12:49.026 "abort": false, 00:12:49.026 "seek_hole": true, 00:12:49.026 "seek_data": true, 00:12:49.026 "copy": false, 00:12:49.026 "nvme_iov_md": false 00:12:49.026 }, 00:12:49.026 "driver_specific": { 00:12:49.026 "lvol": { 00:12:49.026 "lvol_store_uuid": "b5e6cebf-a31b-43be-af90-2ab5e9d91ee2", 00:12:49.026 "base_bdev": "aio_bdev", 00:12:49.026 "thin_provision": false, 00:12:49.026 "num_allocated_clusters": 38, 00:12:49.026 "snapshot": false, 00:12:49.026 "clone": false, 00:12:49.026 "esnap_clone": false 00:12:49.026 } 00:12:49.026 } 00:12:49.026 } 
00:12:49.026 ] 00:12:49.026 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # return 0 00:12:49.026 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:49.026 23:38:37 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:49.285 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:49.285 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:49.285 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:49.543 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:49.543 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 25783694-2061-47e7-ab1e-c745471d625b 00:12:49.543 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5e6cebf-a31b-43be-af90-2ab5e9d91ee2 00:12:49.802 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:50.060 00:12:50.060 real 0m15.530s 00:12:50.060 user 0m15.608s 00:12:50.060 sys 0m0.972s 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1118 -- # xtrace_disable 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:50.060 ************************************ 00:12:50.060 END TEST lvs_grow_clean 00:12:50.060 ************************************ 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # xtrace_disable 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:50.060 ************************************ 00:12:50.060 START TEST lvs_grow_dirty 00:12:50.060 ************************************ 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1117 -- # lvs_grow dirty 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:50.060 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:50.061 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:50.061 23:38:38 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:50.319 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:50.319 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:50.577 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=1005c1f6-eb91-422f-b840-8b1da6968034 00:12:50.577 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:12:50.577 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:50.577 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:50.577 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:50.577 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1005c1f6-eb91-422f-b840-8b1da6968034 lvol 150 00:12:50.835 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=34b85490-84fc-467f-a2d8-1c6ce1c7308c 00:12:50.835 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:50.835 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:50.835 [2024-07-15 23:38:39.805332] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:50.835 [2024-07-15 23:38:39.805389] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:50.835 true 00:12:51.093 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:12:51.093 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:51.093 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 
)) 00:12:51.093 23:38:39 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:51.351 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 34b85490-84fc-467f-a2d8-1c6ce1c7308c 00:12:51.610 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:12:51.611 [2024-07-15 23:38:40.491582] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:51.611 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:12:51.869 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1408688 00:12:51.869 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:51.869 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:51.869 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1408688 /var/tmp/bdevperf.sock 00:12:51.869 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 1408688 ']' 00:12:51.869 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:51.869 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:12:51.869 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:51.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:51.870 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:12:51.870 23:38:40 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:51.870 [2024-07-15 23:38:40.704743] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
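Up to this point the dirty variant has built the same stack as the clean one: a 200M backing file, an aio_bdev with 4096-byte blocks, an lvstore with 4 MiB clusters (49 data clusters), a 150M lvol exported over RDMA at 192.168.100.8:4420, and a bdevperf randwrite load about to start. A condensed sketch of that grow workflow, assuming the paths used by this run; $RPC, $AIO, lvs and lvol are illustrative shorthand, and the final grow step is the one issued mid-I/O a little further down in the log:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    AIO=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev

    # 200 MiB file -> AIO bdev -> lvstore with 4 MiB clusters (49 data clusters).
    truncate -s 200M "$AIO"
    $RPC bdev_aio_create "$AIO" aio_bdev 4096
    lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # 150 MiB volume on top; its UUID is what gets exported as the namespace.
    lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)

    # Grow the file, let the AIO bdev re-read its size (51200 -> 102400 blocks),
    # then grow the lvstore itself so data clusters go from 49 to 99.
    truncate -s 400M "$AIO"
    $RPC bdev_aio_rescan aio_bdev
    $RPC bdev_lvol_grow_lvstore -u "$lvs"
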
00:12:51.870 [2024-07-15 23:38:40.704788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1408688 ] 00:12:51.870 [2024-07-15 23:38:40.757239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.870 [2024-07-15 23:38:40.836003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.804 23:38:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:12:52.804 23:38:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:12:52.804 23:38:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:52.804 Nvme0n1 00:12:52.804 23:38:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:53.063 [ 00:12:53.063 { 00:12:53.063 "name": "Nvme0n1", 00:12:53.063 "aliases": [ 00:12:53.063 "34b85490-84fc-467f-a2d8-1c6ce1c7308c" 00:12:53.063 ], 00:12:53.063 "product_name": "NVMe disk", 00:12:53.063 "block_size": 4096, 00:12:53.063 "num_blocks": 38912, 00:12:53.063 "uuid": "34b85490-84fc-467f-a2d8-1c6ce1c7308c", 00:12:53.063 "assigned_rate_limits": { 00:12:53.063 "rw_ios_per_sec": 0, 00:12:53.063 "rw_mbytes_per_sec": 0, 00:12:53.063 "r_mbytes_per_sec": 0, 00:12:53.063 "w_mbytes_per_sec": 0 00:12:53.063 }, 00:12:53.063 "claimed": false, 00:12:53.063 "zoned": false, 00:12:53.063 "supported_io_types": { 00:12:53.063 "read": true, 00:12:53.063 "write": true, 00:12:53.063 "unmap": true, 00:12:53.063 "flush": true, 00:12:53.063 "reset": true, 00:12:53.063 "nvme_admin": true, 00:12:53.063 "nvme_io": true, 00:12:53.063 "nvme_io_md": false, 00:12:53.063 "write_zeroes": true, 00:12:53.063 "zcopy": false, 00:12:53.063 "get_zone_info": false, 00:12:53.063 "zone_management": false, 00:12:53.063 "zone_append": false, 00:12:53.063 "compare": true, 00:12:53.063 "compare_and_write": true, 00:12:53.063 "abort": true, 00:12:53.063 "seek_hole": false, 00:12:53.063 "seek_data": false, 00:12:53.063 "copy": true, 00:12:53.063 "nvme_iov_md": false 00:12:53.063 }, 00:12:53.063 "memory_domains": [ 00:12:53.063 { 00:12:53.063 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:12:53.063 "dma_device_type": 0 00:12:53.063 } 00:12:53.063 ], 00:12:53.063 "driver_specific": { 00:12:53.063 "nvme": [ 00:12:53.063 { 00:12:53.063 "trid": { 00:12:53.063 "trtype": "RDMA", 00:12:53.063 "adrfam": "IPv4", 00:12:53.063 "traddr": "192.168.100.8", 00:12:53.063 "trsvcid": "4420", 00:12:53.063 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:53.063 }, 00:12:53.063 "ctrlr_data": { 00:12:53.063 "cntlid": 1, 00:12:53.063 "vendor_id": "0x8086", 00:12:53.063 "model_number": "SPDK bdev Controller", 00:12:53.063 "serial_number": "SPDK0", 00:12:53.063 "firmware_revision": "24.09", 00:12:53.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:53.063 "oacs": { 00:12:53.063 "security": 0, 00:12:53.063 "format": 0, 00:12:53.063 "firmware": 0, 00:12:53.063 "ns_manage": 0 00:12:53.063 }, 00:12:53.063 "multi_ctrlr": true, 00:12:53.063 "ana_reporting": false 00:12:53.063 }, 00:12:53.063 "vs": { 00:12:53.063 
"nvme_version": "1.3" 00:12:53.063 }, 00:12:53.063 "ns_data": { 00:12:53.063 "id": 1, 00:12:53.063 "can_share": true 00:12:53.063 } 00:12:53.063 } 00:12:53.063 ], 00:12:53.063 "mp_policy": "active_passive" 00:12:53.063 } 00:12:53.063 } 00:12:53.063 ] 00:12:53.063 23:38:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1408858 00:12:53.063 23:38:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:53.063 23:38:41 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:53.063 Running I/O for 10 seconds... 00:12:54.455 Latency(us) 00:12:54.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:54.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.455 Nvme0n1 : 1.00 34496.00 134.75 0.00 0.00 0.00 0.00 0.00 00:12:54.455 =================================================================================================================== 00:12:54.455 Total : 34496.00 134.75 0.00 0.00 0.00 0.00 0.00 00:12:54.455 00:12:55.020 23:38:43 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:12:55.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.279 Nvme0n1 : 2.00 34832.00 136.06 0.00 0.00 0.00 0.00 0.00 00:12:55.279 =================================================================================================================== 00:12:55.279 Total : 34832.00 136.06 0.00 0.00 0.00 0.00 0.00 00:12:55.279 00:12:55.279 true 00:12:55.279 23:38:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:12:55.279 23:38:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:55.538 23:38:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:55.538 23:38:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:55.538 23:38:44 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1408858 00:12:56.103 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.103 Nvme0n1 : 3.00 34922.67 136.42 0.00 0.00 0.00 0.00 0.00 00:12:56.103 =================================================================================================================== 00:12:56.103 Total : 34922.67 136.42 0.00 0.00 0.00 0.00 0.00 00:12:56.103 00:12:57.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:57.477 Nvme0n1 : 4.00 35049.75 136.91 0.00 0.00 0.00 0.00 0.00 00:12:57.477 =================================================================================================================== 00:12:57.477 Total : 35049.75 136.91 0.00 0.00 0.00 0.00 0.00 00:12:57.477 00:12:58.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:58.411 Nvme0n1 : 5.00 35137.60 137.26 0.00 0.00 0.00 0.00 0.00 00:12:58.411 =================================================================================================================== 00:12:58.411 Total : 35137.60 137.26 0.00 
0.00 0.00 0.00 0.00 00:12:58.411 00:12:59.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:59.346 Nvme0n1 : 6.00 35204.50 137.52 0.00 0.00 0.00 0.00 0.00 00:12:59.346 =================================================================================================================== 00:12:59.346 Total : 35204.50 137.52 0.00 0.00 0.00 0.00 0.00 00:12:59.346 00:13:00.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:00.286 Nvme0n1 : 7.00 35254.00 137.71 0.00 0.00 0.00 0.00 0.00 00:13:00.286 =================================================================================================================== 00:13:00.286 Total : 35254.00 137.71 0.00 0.00 0.00 0.00 0.00 00:13:00.286 00:13:01.223 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:01.223 Nvme0n1 : 8.00 35287.12 137.84 0.00 0.00 0.00 0.00 0.00 00:13:01.223 =================================================================================================================== 00:13:01.223 Total : 35287.12 137.84 0.00 0.00 0.00 0.00 0.00 00:13:01.223 00:13:02.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:02.160 Nvme0n1 : 9.00 35257.78 137.73 0.00 0.00 0.00 0.00 0.00 00:13:02.160 =================================================================================================================== 00:13:02.160 Total : 35257.78 137.73 0.00 0.00 0.00 0.00 0.00 00:13:02.160 00:13:03.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.096 Nvme0n1 : 10.00 35260.10 137.73 0.00 0.00 0.00 0.00 0.00 00:13:03.096 =================================================================================================================== 00:13:03.096 Total : 35260.10 137.73 0.00 0.00 0.00 0.00 0.00 00:13:03.096 00:13:03.096 00:13:03.096 Latency(us) 00:13:03.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.096 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:13:03.096 Nvme0n1 : 10.00 35260.96 137.74 0.00 0.00 3627.06 2730.67 15978.30 00:13:03.096 =================================================================================================================== 00:13:03.096 Total : 35260.96 137.74 0.00 0.00 3627.06 2730.67 15978.30 00:13:03.096 0 00:13:03.096 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1408688 00:13:03.096 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@942 -- # '[' -z 1408688 ']' 00:13:03.096 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # kill -0 1408688 00:13:03.096 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # uname 00:13:03.096 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:03.096 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1408688 00:13:03.355 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:13:03.355 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:13:03.355 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1408688' 00:13:03.355 killing process with pid 1408688 00:13:03.355 23:38:52 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@961 -- # kill 1408688 00:13:03.355 Received shutdown signal, test time was about 10.000000 seconds 00:13:03.355 00:13:03.355 Latency(us) 00:13:03.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:03.355 =================================================================================================================== 00:13:03.355 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:03.355 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # wait 1408688 00:13:03.355 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:13:03.615 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:03.874 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:03.874 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:13:03.874 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:13:03.874 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:13:03.874 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1405495 00:13:03.874 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1405495 00:13:04.134 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1405495 Killed "${NVMF_APP[@]}" "$@" 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1410678 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1410678 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@823 -- # '[' -z 1410678 ']' 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:04.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
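Here the dirty path diverges from the clean one: instead of tearing the stack down, the test kill -9s the original nvmf_tgt (pid 1405495) so the lvstore never sees a clean shutdown, then starts a fresh target and waits for its RPC socket. When the aio_bdev is recreated on the new target in the next step, the blobstore under the lvstore notices the unclean shutdown and replays its metadata, which is what the "Performing recovery on blobstore" notices report. A rough sketch of that restart, assuming the same binaries and paths; $RPC and $nvmfpid are illustrative shorthand:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Kill the target without a clean shutdown, leaving the lvstore dirty.
    kill -9 "$nvmfpid"; wait "$nvmfpid" 2>/dev/null

    # Relaunch nvmf_tgt; the suite waits for /var/tmp/spdk.sock before issuing RPCs.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # Re-registering the backing file makes the blobstore replay its metadata
    # before the lvol shows up again on the new target.
    $RPC bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
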
00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:04.134 23:38:52 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:04.134 [2024-07-15 23:38:52.920126] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:04.134 [2024-07-15 23:38:52.920170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:04.134 [2024-07-15 23:38:52.975967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.134 [2024-07-15 23:38:53.056466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:04.134 [2024-07-15 23:38:53.056498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:04.134 [2024-07-15 23:38:53.056505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:04.134 [2024-07-15 23:38:53.056511] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:04.134 [2024-07-15 23:38:53.056516] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:04.134 [2024-07-15 23:38:53.056532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # return 0 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:05.072 [2024-07-15 23:38:53.905110] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:13:05.072 [2024-07-15 23:38:53.905185] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:13:05.072 [2024-07-15 23:38:53.905208] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 34b85490-84fc-467f-a2d8-1c6ce1c7308c 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 -- # local bdev_name=34b85490-84fc-467f-a2d8-1c6ce1c7308c 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:05.072 23:38:53 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:05.072 23:38:53 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:05.331 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 34b85490-84fc-467f-a2d8-1c6ce1c7308c -t 2000 00:13:05.331 [ 00:13:05.331 { 00:13:05.331 "name": "34b85490-84fc-467f-a2d8-1c6ce1c7308c", 00:13:05.331 "aliases": [ 00:13:05.331 "lvs/lvol" 00:13:05.331 ], 00:13:05.331 "product_name": "Logical Volume", 00:13:05.331 "block_size": 4096, 00:13:05.331 "num_blocks": 38912, 00:13:05.331 "uuid": "34b85490-84fc-467f-a2d8-1c6ce1c7308c", 00:13:05.331 "assigned_rate_limits": { 00:13:05.331 "rw_ios_per_sec": 0, 00:13:05.331 "rw_mbytes_per_sec": 0, 00:13:05.331 "r_mbytes_per_sec": 0, 00:13:05.331 "w_mbytes_per_sec": 0 00:13:05.331 }, 00:13:05.331 "claimed": false, 00:13:05.331 "zoned": false, 00:13:05.331 "supported_io_types": { 00:13:05.331 "read": true, 00:13:05.331 "write": true, 00:13:05.331 "unmap": true, 00:13:05.331 "flush": false, 00:13:05.331 "reset": true, 00:13:05.331 "nvme_admin": false, 00:13:05.331 "nvme_io": false, 00:13:05.331 "nvme_io_md": false, 00:13:05.331 "write_zeroes": true, 00:13:05.331 "zcopy": false, 00:13:05.331 "get_zone_info": false, 00:13:05.331 "zone_management": false, 00:13:05.331 "zone_append": false, 00:13:05.331 "compare": false, 00:13:05.331 "compare_and_write": false, 00:13:05.331 "abort": false, 00:13:05.331 "seek_hole": true, 00:13:05.331 "seek_data": true, 00:13:05.331 "copy": false, 00:13:05.331 "nvme_iov_md": false 00:13:05.331 }, 00:13:05.331 "driver_specific": { 00:13:05.331 "lvol": { 00:13:05.331 "lvol_store_uuid": "1005c1f6-eb91-422f-b840-8b1da6968034", 00:13:05.331 "base_bdev": "aio_bdev", 00:13:05.331 "thin_provision": false, 00:13:05.331 "num_allocated_clusters": 38, 00:13:05.331 "snapshot": false, 00:13:05.331 "clone": false, 00:13:05.331 "esnap_clone": false 00:13:05.331 } 00:13:05.331 } 00:13:05.331 } 00:13:05.331 ] 00:13:05.331 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:13:05.331 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:05.331 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:13:05.591 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:13:05.591 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:05.591 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:05.850 [2024-07-15 23:38:54.761787] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # local es=0 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:13:05.850 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:06.109 request: 00:13:06.109 { 00:13:06.109 "uuid": "1005c1f6-eb91-422f-b840-8b1da6968034", 00:13:06.109 "method": "bdev_lvol_get_lvstores", 00:13:06.109 "req_id": 1 00:13:06.109 } 00:13:06.109 Got JSON-RPC error response 00:13:06.109 response: 00:13:06.109 { 00:13:06.109 "code": -19, 00:13:06.109 "message": "No such device" 00:13:06.109 } 00:13:06.109 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@645 -- # es=1 00:13:06.109 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:13:06.109 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:13:06.109 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:13:06.109 23:38:54 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:13:06.368 aio_bdev 00:13:06.368 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 34b85490-84fc-467f-a2d8-1c6ce1c7308c 00:13:06.368 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@891 
-- # local bdev_name=34b85490-84fc-467f-a2d8-1c6ce1c7308c 00:13:06.368 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@892 -- # local bdev_timeout= 00:13:06.368 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@893 -- # local i 00:13:06.368 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # [[ -z '' ]] 00:13:06.368 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@894 -- # bdev_timeout=2000 00:13:06.368 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:13:06.368 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 34b85490-84fc-467f-a2d8-1c6ce1c7308c -t 2000 00:13:06.627 [ 00:13:06.627 { 00:13:06.627 "name": "34b85490-84fc-467f-a2d8-1c6ce1c7308c", 00:13:06.627 "aliases": [ 00:13:06.627 "lvs/lvol" 00:13:06.627 ], 00:13:06.627 "product_name": "Logical Volume", 00:13:06.627 "block_size": 4096, 00:13:06.627 "num_blocks": 38912, 00:13:06.627 "uuid": "34b85490-84fc-467f-a2d8-1c6ce1c7308c", 00:13:06.627 "assigned_rate_limits": { 00:13:06.627 "rw_ios_per_sec": 0, 00:13:06.627 "rw_mbytes_per_sec": 0, 00:13:06.627 "r_mbytes_per_sec": 0, 00:13:06.627 "w_mbytes_per_sec": 0 00:13:06.627 }, 00:13:06.627 "claimed": false, 00:13:06.627 "zoned": false, 00:13:06.627 "supported_io_types": { 00:13:06.627 "read": true, 00:13:06.627 "write": true, 00:13:06.627 "unmap": true, 00:13:06.627 "flush": false, 00:13:06.627 "reset": true, 00:13:06.627 "nvme_admin": false, 00:13:06.627 "nvme_io": false, 00:13:06.627 "nvme_io_md": false, 00:13:06.627 "write_zeroes": true, 00:13:06.627 "zcopy": false, 00:13:06.627 "get_zone_info": false, 00:13:06.627 "zone_management": false, 00:13:06.627 "zone_append": false, 00:13:06.627 "compare": false, 00:13:06.627 "compare_and_write": false, 00:13:06.627 "abort": false, 00:13:06.627 "seek_hole": true, 00:13:06.627 "seek_data": true, 00:13:06.627 "copy": false, 00:13:06.627 "nvme_iov_md": false 00:13:06.627 }, 00:13:06.627 "driver_specific": { 00:13:06.627 "lvol": { 00:13:06.627 "lvol_store_uuid": "1005c1f6-eb91-422f-b840-8b1da6968034", 00:13:06.627 "base_bdev": "aio_bdev", 00:13:06.627 "thin_provision": false, 00:13:06.627 "num_allocated_clusters": 38, 00:13:06.627 "snapshot": false, 00:13:06.627 "clone": false, 00:13:06.627 "esnap_clone": false 00:13:06.628 } 00:13:06.628 } 00:13:06.628 } 00:13:06.628 ] 00:13:06.628 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # return 0 00:13:06.628 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:06.628 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:13:06.886 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:13:06.886 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:06.886 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:13:06.886 23:38:55 
nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:13:06.886 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 34b85490-84fc-467f-a2d8-1c6ce1c7308c 00:13:07.145 23:38:55 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1005c1f6-eb91-422f-b840-8b1da6968034 00:13:07.404 23:38:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:07.404 23:38:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:07.404 00:13:07.404 real 0m17.420s 00:13:07.404 user 0m45.487s 00:13:07.404 sys 0m2.909s 00:13:07.404 23:38:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:07.404 23:38:56 nvmf_rdma.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:07.404 ************************************ 00:13:07.404 END TEST lvs_grow_dirty 00:13:07.404 ************************************ 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1136 -- # return 0 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@800 -- # type=--id 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@801 -- # id=0 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # for n in $shm_files 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:07.664 nvmf_trace.0 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@815 -- # return 0 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:07.664 rmmod nvme_rdma 00:13:07.664 rmmod nvme_fabrics 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:13:07.664 
23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1410678 ']' 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1410678 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@942 -- # '[' -z 1410678 ']' 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # kill -0 1410678 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # uname 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1410678 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1410678' 00:13:07.664 killing process with pid 1410678 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@961 -- # kill 1410678 00:13:07.664 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # wait 1410678 00:13:07.924 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.924 23:38:56 nvmf_rdma.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:07.924 00:13:07.924 real 0m40.013s 00:13:07.924 user 1m6.900s 00:13:07.924 sys 0m8.305s 00:13:07.924 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:07.924 23:38:56 nvmf_rdma.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:07.924 ************************************ 00:13:07.924 END TEST nvmf_lvs_grow 00:13:07.924 ************************************ 00:13:07.924 23:38:56 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:13:07.924 23:38:56 nvmf_rdma -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:07.924 23:38:56 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:07.924 23:38:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:07.924 23:38:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:07.924 ************************************ 00:13:07.924 START TEST nvmf_bdev_io_wait 00:13:07.924 ************************************ 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:13:07.924 * Looking for test storage... 
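Before bdev_io_wait can create any RDMA listener, nvmftestinit has to find the ConnectX ports and bring up the kernel RDMA stack; the PCI scan and modprobe calls further down in this log are that step. A compressed sketch of the same initialization, assuming the stock in-tree modules and the Mellanox mlx5 ports seen on this node:

    # Kernel modules probed by rdma_device_init further down in this log.
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done

    # Helper the common code runs while mapping RDMA-capable netdevs (mlx_0_0/mlx_0_1
    # under 0000:da:00.0/1 here); test addresses then come from 192.168.100.0/24,
    # starting at .8.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net
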
00:13:07.924 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:07.924 23:38:56 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.924 23:38:56 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.196 
23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:13.196 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:13.196 23:39:01 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:13.196 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:13.196 Found net devices under 0000:da:00.0: mlx_0_0 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:13.196 Found net devices under 0000:da:00.1: mlx_0_1 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:13.196 23:39:02 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:13.196 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:13.196 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:13.196 altname enp218s0f0np0 00:13:13.196 altname ens818f0np0 00:13:13.196 inet 192.168.100.8/24 scope global mlx_0_0 00:13:13.196 valid_lft forever preferred_lft forever 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:13.196 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:13.196 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:13.196 altname enp218s0f1np1 00:13:13.196 altname ens818f1np1 00:13:13.196 inet 192.168.100.9/24 scope global mlx_0_1 00:13:13.196 valid_lft forever preferred_lft forever 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:13:13.196 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:13.197 23:39:02 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:13.197 192.168.100.9' 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:13.197 192.168.100.9' 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait 
-- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:13.197 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:13.455 192.168.100.9' 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1414619 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1414619 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@823 -- # '[' -z 1414619 ']' 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.455 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:13.456 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.456 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:13.456 23:39:02 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:13.456 [2024-07-15 23:39:02.254786] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:13.456 [2024-07-15 23:39:02.254828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.456 [2024-07-15 23:39:02.311015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:13.456 [2024-07-15 23:39:02.394201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.456 [2024-07-15 23:39:02.394240] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
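The address discovery in the trace above reduces to a short pipeline over "ip -o -4 addr show". A minimal sketch of that logic follows; the function name is illustrative (the real helpers, get_ip_address/allocate_nic_ips, live in nvmf/common.sh), but the pipeline itself is exactly what the trace runs.

# Illustrative helper (hypothetical name): print the first IPv4 address of an RDMA interface.
# -o emits one line per address; field 4 is "addr/prefix", and cut strips the prefix length.
rdma_if_ipv4() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

# Against the interfaces found in this run:
rdma_if_ipv4 mlx_0_0    # -> 192.168.100.8  (NVMF_FIRST_TARGET_IP)
rdma_if_ipv4 mlx_0_1    # -> 192.168.100.9  (NVMF_SECOND_TARGET_IP)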
00:13:13.456 [2024-07-15 23:39:02.394246] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.456 [2024-07-15 23:39:02.394252] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.456 [2024-07-15 23:39:02.394257] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.456 [2024-07-15 23:39:02.394292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.456 [2024-07-15 23:39:02.394393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.456 [2024-07-15 23:39:02.394480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.456 [2024-07-15 23:39:02.394480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # return 0 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.390 [2024-07-15 23:39:03.208134] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc40b20/0xc45010) succeed. 00:13:14.390 [2024-07-15 23:39:03.217052] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc42160/0xc866a0) succeed. 
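The rpc_cmd calls in this test are wrappers around SPDK's scripts/rpc.py client, talking to the nvmf_tgt process that nvmfappstart launched. A rough equivalent of the target-side setup is sketched below, assuming the default /var/tmp/spdk.sock RPC socket; the method names and arguments are taken directly from the trace (the transport created above, the Malloc0-backed subsystem built in the lines that follow), but exact option spellings can vary between SPDK versions.

# Sketch only: the same RPC sequence as the rpc_cmd calls in bdev_io_wait.sh,
# issued with the in-tree rpc.py client against the default RPC socket.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
$SPDK/scripts/rpc.py bdev_set_options -p 5 -c 1
$SPDK/scripts/rpc.py framework_start_init
$SPDK/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420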
00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.390 Malloc0 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:14.390 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:14.649 [2024-07-15 23:39:03.388858] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1414866 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1414868 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:14.649 { 00:13:14.649 "params": { 00:13:14.649 "name": "Nvme$subsystem", 00:13:14.649 "trtype": "$TEST_TRANSPORT", 00:13:14.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:14.649 "adrfam": "ipv4", 00:13:14.649 "trsvcid": "$NVMF_PORT", 00:13:14.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:14.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:14.649 "hdgst": ${hdgst:-false}, 00:13:14.649 "ddgst": ${ddgst:-false} 00:13:14.649 }, 00:13:14.649 "method": "bdev_nvme_attach_controller" 00:13:14.649 } 00:13:14.649 EOF 00:13:14.649 
)") 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1414870 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:14.649 { 00:13:14.649 "params": { 00:13:14.649 "name": "Nvme$subsystem", 00:13:14.649 "trtype": "$TEST_TRANSPORT", 00:13:14.649 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:14.649 "adrfam": "ipv4", 00:13:14.649 "trsvcid": "$NVMF_PORT", 00:13:14.649 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:14.649 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:14.649 "hdgst": ${hdgst:-false}, 00:13:14.649 "ddgst": ${ddgst:-false} 00:13:14.649 }, 00:13:14.649 "method": "bdev_nvme_attach_controller" 00:13:14.649 } 00:13:14.649 EOF 00:13:14.649 )") 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1414873 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:14.649 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:14.650 { 00:13:14.650 "params": { 00:13:14.650 "name": "Nvme$subsystem", 00:13:14.650 "trtype": "$TEST_TRANSPORT", 00:13:14.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:14.650 "adrfam": "ipv4", 00:13:14.650 "trsvcid": "$NVMF_PORT", 00:13:14.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:14.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:14.650 "hdgst": ${hdgst:-false}, 00:13:14.650 "ddgst": ${ddgst:-false} 00:13:14.650 }, 00:13:14.650 "method": "bdev_nvme_attach_controller" 00:13:14.650 } 00:13:14.650 EOF 00:13:14.650 )") 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:13:14.650 23:39:03 
nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:14.650 { 00:13:14.650 "params": { 00:13:14.650 "name": "Nvme$subsystem", 00:13:14.650 "trtype": "$TEST_TRANSPORT", 00:13:14.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:14.650 "adrfam": "ipv4", 00:13:14.650 "trsvcid": "$NVMF_PORT", 00:13:14.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:14.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:14.650 "hdgst": ${hdgst:-false}, 00:13:14.650 "ddgst": ${ddgst:-false} 00:13:14.650 }, 00:13:14.650 "method": "bdev_nvme_attach_controller" 00:13:14.650 } 00:13:14.650 EOF 00:13:14.650 )") 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1414866 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:14.650 "params": { 00:13:14.650 "name": "Nvme1", 00:13:14.650 "trtype": "rdma", 00:13:14.650 "traddr": "192.168.100.8", 00:13:14.650 "adrfam": "ipv4", 00:13:14.650 "trsvcid": "4420", 00:13:14.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:14.650 "hdgst": false, 00:13:14.650 "ddgst": false 00:13:14.650 }, 00:13:14.650 "method": "bdev_nvme_attach_controller" 00:13:14.650 }' 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:14.650 "params": { 00:13:14.650 "name": "Nvme1", 00:13:14.650 "trtype": "rdma", 00:13:14.650 "traddr": "192.168.100.8", 00:13:14.650 "adrfam": "ipv4", 00:13:14.650 "trsvcid": "4420", 00:13:14.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:14.650 "hdgst": false, 00:13:14.650 "ddgst": false 00:13:14.650 }, 00:13:14.650 "method": "bdev_nvme_attach_controller" 00:13:14.650 }' 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:14.650 "params": { 00:13:14.650 "name": "Nvme1", 00:13:14.650 "trtype": "rdma", 00:13:14.650 "traddr": "192.168.100.8", 00:13:14.650 "adrfam": "ipv4", 00:13:14.650 "trsvcid": "4420", 00:13:14.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:14.650 "hdgst": false, 00:13:14.650 "ddgst": false 00:13:14.650 }, 00:13:14.650 "method": "bdev_nvme_attach_controller" 00:13:14.650 }' 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:13:14.650 23:39:03 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:14.650 "params": { 00:13:14.650 "name": "Nvme1", 00:13:14.650 "trtype": "rdma", 00:13:14.650 "traddr": "192.168.100.8", 00:13:14.650 "adrfam": "ipv4", 00:13:14.650 "trsvcid": "4420", 00:13:14.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:14.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:14.650 "hdgst": false, 00:13:14.650 "ddgst": false 00:13:14.650 }, 00:13:14.650 "method": "bdev_nvme_attach_controller" 00:13:14.650 }' 00:13:14.650 [2024-07-15 23:39:03.436331] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:14.650 [2024-07-15 23:39:03.436329] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:14.650 [2024-07-15 23:39:03.436378] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-15 23:39:03.436379] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:14.650 --proc-type=auto ] 00:13:14.650 [2024-07-15 23:39:03.437619] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:14.650 [2024-07-15 23:39:03.437656] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:14.650 [2024-07-15 23:39:03.438769] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
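For readability, the one-line fragment that gen_nvmf_target_json emits for each bdevperf instance (and that bdevperf reads over /dev/fd/63) is reproduced below, pretty-printed with jq. The values are exactly those in the printf output above; only this fragment is shown, since the surrounding wrapper object is not visible in the trace.

# Layout only: pipe the fragment from the trace through jq to make it readable.
jq . <<'JSON'
{
  "params": {
    "name": "Nvme1",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON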
00:13:14.650 [2024-07-15 23:39:03.438815] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:14.650 [2024-07-15 23:39:03.617930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.909 [2024-07-15 23:39:03.691952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:13:14.909 [2024-07-15 23:39:03.711119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.909 [2024-07-15 23:39:03.785678] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:13:14.909 [2024-07-15 23:39:03.809254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.909 [2024-07-15 23:39:03.869650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.167 [2024-07-15 23:39:03.897785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:13:15.167 [2024-07-15 23:39:03.948259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:13:15.167 Running I/O for 1 seconds... 00:13:15.167 Running I/O for 1 seconds... 00:13:15.167 Running I/O for 1 seconds... 00:13:15.167 Running I/O for 1 seconds... 00:13:16.101 00:13:16.101 Latency(us) 00:13:16.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.101 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:16.101 Nvme1n1 : 1.00 20509.82 80.12 0.00 0.00 6224.65 3791.73 13918.60 00:13:16.101 =================================================================================================================== 00:13:16.101 Total : 20509.82 80.12 0.00 0.00 6224.65 3791.73 13918.60 00:13:16.101 00:13:16.101 Latency(us) 00:13:16.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.101 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:16.101 Nvme1n1 : 1.01 14606.22 57.06 0.00 0.00 8734.45 5710.99 17351.44 00:13:16.101 =================================================================================================================== 00:13:16.101 Total : 14606.22 57.06 0.00 0.00 8734.45 5710.99 17351.44 00:13:16.101 00:13:16.101 Latency(us) 00:13:16.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.101 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:16.101 Nvme1n1 : 1.00 14825.62 57.91 0.00 0.00 8613.63 4993.22 19473.55 00:13:16.101 =================================================================================================================== 00:13:16.101 Total : 14825.62 57.91 0.00 0.00 8613.63 4993.22 19473.55 00:13:16.360 00:13:16.360 Latency(us) 00:13:16.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.360 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:16.360 Nvme1n1 : 1.00 249804.80 975.80 0.00 0.00 510.37 208.70 1755.43 00:13:16.360 =================================================================================================================== 00:13:16.360 Total : 249804.80 975.80 0.00 0.00 510.37 208.70 1755.43 00:13:16.360 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1414868 00:13:16.360 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1414870 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1414873 00:13:16.619 23:39:05 
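As a quick sanity check on the bdevperf tables above (a derived calculation, not part of the test output): MiB/s is simply IOPS times the 4096-byte I/O size divided by 2^20, which reproduces the reported figures for all four jobs.

# MiB/s = IOPS * 4096 / 1048576 for the 4 KiB-I/O jobs above
awk 'BEGIN { printf "%.2f\n", 20509.82  * 4096 / 1048576 }'   # 80.12  MiB/s (unmap)
awk 'BEGIN { printf "%.2f\n", 14606.22  * 4096 / 1048576 }'   # 57.06  MiB/s (write)
awk 'BEGIN { printf "%.2f\n", 14825.62  * 4096 / 1048576 }'   # 57.91  MiB/s (read)
awk 'BEGIN { printf "%.2f\n", 249804.80 * 4096 / 1048576 }'   # 975.80 MiB/s (flush)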
nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:16.619 rmmod nvme_rdma 00:13:16.619 rmmod nvme_fabrics 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1414619 ']' 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1414619 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@942 -- # '[' -z 1414619 ']' 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # kill -0 1414619 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # uname 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1414619 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1414619' 00:13:16.619 killing process with pid 1414619 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@961 -- # kill 1414619 00:13:16.619 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # wait 1414619 00:13:16.878 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.879 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:16.879 00:13:16.879 real 0m8.912s 00:13:16.879 user 0m20.233s 00:13:16.879 sys 0m5.292s 00:13:16.879 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:16.879 23:39:05 nvmf_rdma.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:16.879 ************************************ 00:13:16.879 END TEST nvmf_bdev_io_wait 00:13:16.879 
************************************ 00:13:16.879 23:39:05 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:13:16.879 23:39:05 nvmf_rdma -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:16.879 23:39:05 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:16.879 23:39:05 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:16.879 23:39:05 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:16.879 ************************************ 00:13:16.879 START TEST nvmf_queue_depth 00:13:16.879 ************************************ 00:13:16.879 23:39:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:13:16.879 * Looking for test storage... 00:13:16.879 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:16.879 23:39:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.879 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:16.879 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.879 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.879 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.879 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.879 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- 
target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:13:17.138 23:39:05 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:22.404 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:22.404 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:22.404 Found net devices under 0000:da:00.0: mlx_0_0 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.404 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:22.404 Found net devices under 0000:da:00.1: mlx_0_1 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- 
nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:22.405 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:22.405 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:22.405 altname enp218s0f0np0 00:13:22.405 altname ens818f0np0 00:13:22.405 inet 192.168.100.8/24 scope global mlx_0_0 00:13:22.405 valid_lft forever preferred_lft forever 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:22.405 23:39:10 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:22.405 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:22.405 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:22.405 altname enp218s0f1np1 00:13:22.405 altname ens818f1np1 
00:13:22.405 inet 192.168.100.9/24 scope global mlx_0_1 00:13:22.405 valid_lft forever preferred_lft forever 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print 
$4}' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:22.405 192.168.100.9' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:22.405 192.168.100.9' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:22.405 192.168.100.9' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1418651 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1418651 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 1418651 ']' 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:22.405 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:22.405 [2024-07-15 23:39:11.147834] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
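The target is launched here via nvmfappstart, and waitforlisten (visible just above with rpc_addr=/var/tmp/spdk.sock and max_retries=100) blocks until the app answers on its RPC socket. A hypothetical, condensed sketch of that polling idea, assuming rpc.py with rpc_get_methods as the liveness probe rather than SPDK's exact helper:

    # poll until the freshly started app both stays alive and serves its RPC socket
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2> /dev/null || return 1                       # app exited early
            ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1                                                          # timed out
    }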
00:13:22.405 [2024-07-15 23:39:11.147881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.405 [2024-07-15 23:39:11.206317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.405 [2024-07-15 23:39:11.279699] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.406 [2024-07-15 23:39:11.279738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.406 [2024-07-15 23:39:11.279745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.406 [2024-07-15 23:39:11.279751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.406 [2024-07-15 23:39:11.279755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.406 [2024-07-15 23:39:11.279773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.971 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:22.971 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0 00:13:22.971 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:22.971 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:22.971 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.231 23:39:11 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.231 23:39:11 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:23.231 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:23.231 23:39:11 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.231 [2024-07-15 23:39:12.014088] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1145c20/0x114a110) succeed. 00:13:23.231 [2024-07-15 23:39:12.023473] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1147120/0x118b7a0) succeed. 
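At this point nvmf_tgt is running on core mask 0x2 and the RDMA transport has been created, which is what produces the two "Create IB device mlx5_X ... succeed" notices above. A sketch of the equivalent manual sequence, with paths assumed relative to the SPDK checkout used in this workspace:

    # kernel prerequisites, mirroring load_ib_rdma_modules earlier in the log
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        modprobe "$m"
    done

    # start the target (instance 0, tracepoint mask 0xFFFF, core mask 0x2), then add the transport;
    # wait for /var/tmp/spdk.sock before issuing RPCs (see the waitforlisten sketch above)
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192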
00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.231 Malloc0 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.231 [2024-07-15 23:39:12.102959] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1418827 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:13:23.231 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:23.232 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1418827 /var/tmp/bdevperf.sock 00:13:23.232 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@823 -- # '[' -z 1418827 ']' 00:13:23.232 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:23.232 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:23.232 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:23.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
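The target-side setup logged above (queue_depth.sh@24-27) reduces to four RPCs: a 64 MB malloc bdev with 512-byte blocks, a subsystem, its namespace, and an RDMA listener on the first target IP. A sketch using rpc.py directly, with the values copied from the log:

    rpc=./scripts/rpc.py      # path assumed relative to the SPDK checkout
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420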
00:13:23.232 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:23.232 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:23.232 [2024-07-15 23:39:12.149046] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:23.232 [2024-07-15 23:39:12.149086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418827 ] 00:13:23.232 [2024-07-15 23:39:12.203371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.550 [2024-07-15 23:39:12.279217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.141 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:24.141 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@856 -- # return 0 00:13:24.141 23:39:12 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:24.141 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:24.141 23:39:12 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.141 NVMe0n1 00:13:24.141 23:39:13 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:24.141 23:39:13 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:24.141 Running I/O for 10 seconds... 
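bdevperf runs as a separate SPDK app with its own RPC socket; the log above launches it with -z (wait for RPC), attaches the remote namespace over RDMA, and then triggers the workload from bdevperf.py. A sketch of that driver sequence with the exact arguments seen here:

    # initiator-side benchmark app: queue depth 1024, 4 KiB I/O, verify workload, 10 s runtime
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &

    # attach the target's namespace, then start the timed run
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests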
00:13:36.341 00:13:36.341 Latency(us) 00:13:36.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.341 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:36.341 Verification LBA range: start 0x0 length 0x4000 00:13:36.341 NVMe0n1 : 10.05 17634.50 68.88 0.00 0.00 57924.87 22719.15 36200.84 00:13:36.341 =================================================================================================================== 00:13:36.341 Total : 17634.50 68.88 0.00 0.00 57924.87 22719.15 36200.84 00:13:36.341 0 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1418827 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 1418827 ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 1418827 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1418827 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1418827' 00:13:36.341 killing process with pid 1418827 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 1418827 00:13:36.341 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.341 00:13:36.341 Latency(us) 00:13:36.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.341 =================================================================================================================== 00:13:36.341 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 1418827 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:36.341 rmmod nvme_rdma 00:13:36.341 rmmod nvme_fabrics 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1418651 ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1418651 
00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@942 -- # '[' -z 1418651 ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@946 -- # kill -0 1418651 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@947 -- # uname 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1418651 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1418651' 00:13:36.341 killing process with pid 1418651 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@961 -- # kill 1418651 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@966 -- # wait 1418651 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:36.341 00:13:36.341 real 0m18.029s 00:13:36.341 user 0m25.737s 00:13:36.341 sys 0m4.603s 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:36.341 23:39:23 nvmf_rdma.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:36.341 ************************************ 00:13:36.341 END TEST nvmf_queue_depth 00:13:36.341 ************************************ 00:13:36.341 23:39:23 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:13:36.341 23:39:23 nvmf_rdma -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:36.341 23:39:23 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:36.341 23:39:23 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:36.341 23:39:23 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:36.341 ************************************ 00:13:36.341 START TEST nvmf_target_multipath 00:13:36.341 ************************************ 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:13:36.341 * Looking for test storage... 
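The START/END banners and the real/user/sys timing above come from the run_test wrapper in autotest_common.sh, which every test in this job goes through. A hypothetical, heavily simplified sketch of that pattern (not SPDK's actual implementation, which does more bookkeeping):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                     # source of the real/user/sys lines in the log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }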
00:13:36.341 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.341 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
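multipath.sh begins by sourcing nvmf/common.sh, whose header (echoed at common.sh@9-22 above) fixes the port plan and generates a fresh host NQN for the run. A sketch of those assignments; the hostid derivation is an assumption about how the uuid is stripped out of the NQN:

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: keep only the uuid suffix
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")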
00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.342 23:39:23 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:39.626 23:39:28 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:39.626 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:39.626 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 
== 0 )) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:39.626 Found net devices under 0000:da:00.0: mlx_0_0 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:39.626 Found net devices under 0000:da:00.1: mlx_0_1 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:39.626 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:39.626 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:39.626 altname enp218s0f0np0 00:13:39.626 altname ens818f0np0 00:13:39.626 inet 192.168.100.8/24 scope global mlx_0_0 00:13:39.626 valid_lft forever preferred_lft forever 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:39.626 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:39.626 23:39:28 
nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:39.627 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:39.627 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:39.627 altname enp218s0f1np1 00:13:39.627 altname ens818f1np1 00:13:39.627 inet 192.168.100.9/24 scope global mlx_0_1 00:13:39.627 valid_lft forever preferred_lft forever 00:13:39.627 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:13:39.627 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:39.627 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:39.627 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:39.627 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:39.885 192.168.100.9' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:39.885 192.168.100.9' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:39.885 192.168.100.9' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:13:39.885 run this test only with TCP transport for now 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:39.885 rmmod nvme_rdma 00:13:39.885 rmmod nvme_fabrics 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:39.885 
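With both RDMA IPs resolved, multipath.sh@45-54 above verifies a second target IP exists, checks the transport, and immediately bows out on RDMA, printing the notice and tearing the environment down through nvmftestfini. A sketch of that gate, assuming the already-substituted "rdma" comes from a $TEST_TRANSPORT variable:

    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo 'run this test only with TCP transport for now'
        nvmftestfini        # sync, then: modprobe -v -r nvme-rdma; modprobe -v -r nvme-fabrics
        exit 0
    fi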
23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:39.885 00:13:39.885 real 0m4.869s 00:13:39.885 user 0m1.264s 00:13:39.885 sys 0m3.704s 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:39.885 23:39:28 nvmf_rdma.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:39.885 ************************************ 00:13:39.885 END TEST nvmf_target_multipath 00:13:39.885 ************************************ 00:13:39.885 23:39:28 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:13:39.885 23:39:28 nvmf_rdma -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:39.885 23:39:28 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:39.885 23:39:28 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:39.885 23:39:28 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:39.885 ************************************ 00:13:39.885 START TEST nvmf_zcopy 00:13:39.885 ************************************ 00:13:39.885 23:39:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:13:40.144 * Looking for test storage... 
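The zcopy test starting above repeats the same nvmftestinit flow, including the per-interface address lookup that recurs throughout this log (ip -o -4 ... | awk | cut) and the head/tail split of RDMA_IP_LIST into first and second target IPs. A condensed sketch of those helpers, assumed to mirror nvmf/common.sh:

    # return the first IPv4 address configured on an interface
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    RDMA_IP_LIST=$(get_available_rdma_ips)                          # "192.168.100.8  192.168.100.9" above
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)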
00:13:40.144 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:40.144 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.145 23:39:28 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:45.415 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:45.415 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:45.415 Found net devices under 0000:da:00.0: mlx_0_0 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:45.415 Found net devices under 0000:da:00.1: mlx_0_1 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.415 23:39:33 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:13:45.415 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:45.416 23:39:33 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:45.416 23:39:34 
nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:45.416 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:45.416 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:45.416 altname enp218s0f0np0 00:13:45.416 altname ens818f0np0 00:13:45.416 inet 192.168.100.8/24 scope global mlx_0_0 00:13:45.416 valid_lft forever preferred_lft forever 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:45.416 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:45.416 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:45.416 altname enp218s0f1np1 00:13:45.416 altname ens818f1np1 00:13:45.416 inet 192.168.100.9/24 scope global mlx_0_1 00:13:45.416 valid_lft forever preferred_lft forever 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- 
nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:45.416 192.168.100.9' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:45.416 192.168.100.9' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:45.416 192.168.100.9' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1426767 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1426767 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@823 -- # '[' -z 1426767 ']' 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:45.416 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:45.416 [2024-07-15 23:39:34.192072] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:45.416 [2024-07-15 23:39:34.192120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.416 [2024-07-15 23:39:34.248906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.416 [2024-07-15 23:39:34.321507] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.416 [2024-07-15 23:39:34.321552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.416 [2024-07-15 23:39:34.321559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.416 [2024-07-15 23:39:34.321580] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.416 [2024-07-15 23:39:34.321585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
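At this point nvmfappstart has launched the target and waitforlisten blocks until the JSON-RPC socket appears. Done by hand it is roughly the following; the polling loop is illustrative, and the real waitforlisten in autotest_common.sh does more (retry limits, liveness checks on the pid):

    SPDK_BIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin
    "$SPDK_BIN"/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # shm id 0, all tracepoint groups, core mask 0x2
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do          # wait for the RPC listener socket
        sleep 0.1
    done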
00:13:45.416 [2024-07-15 23:39:34.321604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.352 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:46.352 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@856 -- # return 0 00:13:46.352 23:39:34 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.352 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:46.352 23:39:34 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:46.352 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.352 23:39:35 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:13:46.352 23:39:35 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:13:46.352 Unsupported transport: rdma 00:13:46.352 23:39:35 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:13:46.352 23:39:35 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:13:46.352 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@800 -- # type=--id 00:13:46.352 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@801 -- # id=0 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@802 -- # '[' --id = --pid ']' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@806 -- # shm_files=nvmf_trace.0 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@808 -- # [[ -z nvmf_trace.0 ]] 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@812 -- # for n in $shm_files 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@813 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:46.353 nvmf_trace.0 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@815 -- # return 0 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:46.353 rmmod nvme_rdma 00:13:46.353 rmmod nvme_fabrics 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1426767 ']' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1426767 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@942 -- # '[' -z 1426767 ']' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@946 -- # kill -0 1426767 00:13:46.353 23:39:35 
nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@947 -- # uname 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1426767 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1426767' 00:13:46.353 killing process with pid 1426767 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@961 -- # kill 1426767 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@966 -- # wait 1426767 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:46.353 00:13:46.353 real 0m6.532s 00:13:46.353 user 0m2.847s 00:13:46.353 sys 0m4.308s 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@1118 -- # xtrace_disable 00:13:46.353 23:39:35 nvmf_rdma.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:46.353 ************************************ 00:13:46.353 END TEST nvmf_zcopy 00:13:46.353 ************************************ 00:13:46.611 23:39:35 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:13:46.612 23:39:35 nvmf_rdma -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:46.612 23:39:35 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:13:46.612 23:39:35 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:13:46.612 23:39:35 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:13:46.612 ************************************ 00:13:46.612 START TEST nvmf_nmic 00:13:46.612 ************************************ 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:13:46.612 * Looking for test storage... 
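The zcopy run above ends almost immediately: zero-copy is only exercised over TCP, so target/zcopy.sh@15-17 bails out before doing any I/O. Reconstructed from the expanded xtrace (the literal test in the trace is '[' rdma '!=' tcp ']'; the variable name below is an assumption):

    if [ "$TEST_TRANSPORT" != tcp ]; then
        echo "Unsupported transport: $TEST_TRANSPORT"   # printed here as "Unsupported transport: rdma"
        exit 0                                          # clean exit, so the suite records END TEST nvmf_zcopy as passed
    fi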
00:13:46.612 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.612 
23:39:35 nvmf_rdma.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:46.612 23:39:35 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.882 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:51.883 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:51.883 23:39:40 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:51.883 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:51.883 Found net devices under 0000:da:00.0: mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:51.883 Found net devices under 0000:da:00.1: mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:51.883 23:39:40 
nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:51.883 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 00:13:51.883 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:13:51.883 altname enp218s0f0np0 00:13:51.883 altname ens818f0np0 00:13:51.883 inet 192.168.100.8/24 scope global mlx_0_0 00:13:51.883 valid_lft forever preferred_lft forever 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:51.883 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:51.883 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:13:51.883 altname enp218s0f1np1 00:13:51.883 altname ens818f1np1 00:13:51.883 inet 192.168.100.9/24 scope global mlx_0_1 00:13:51.883 valid_lft forever preferred_lft forever 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- 
nvmf/common.sh@105 -- # continue 2 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:51.883 192.168.100.9' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:51.883 192.168.100.9' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:51.883 192.168.100.9' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:51.883 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1429981 00:13:51.884 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1429981 00:13:51.884 23:39:40 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:51.884 23:39:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@823 -- # '[' -z 1429981 ']' 00:13:51.884 23:39:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.884 23:39:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@828 -- # local max_retries=100 00:13:51.884 23:39:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.884 23:39:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@832 -- # xtrace_disable 00:13:51.884 23:39:40 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:51.884 [2024-07-15 23:39:40.483378] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:13:51.884 [2024-07-15 23:39:40.483442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.884 [2024-07-15 23:39:40.540872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:51.884 [2024-07-15 23:39:40.620612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.884 [2024-07-15 23:39:40.620653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.884 [2024-07-15 23:39:40.620660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.884 [2024-07-15 23:39:40.620665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.884 [2024-07-15 23:39:40.620670] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.884 [2024-07-15 23:39:40.620756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.884 [2024-07-15 23:39:40.620853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:51.884 [2024-07-15 23:39:40.620939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:51.884 [2024-07-15 23:39:40.620941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@856 -- # return 0 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.452 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.452 [2024-07-15 23:39:41.346628] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6d9cc0/0x6de1b0) succeed. 00:13:52.452 [2024-07-15 23:39:41.355725] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6db300/0x71f840) succeed. 
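The rpc_cmd calls that follow build the nmic test bed: an RDMA transport, a 64 MiB malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), subsystem cnode1 with that bdev as a namespace, and a listener on 192.168.100.8:4420. rpc_cmd is the suite's wrapper around scripts/rpc.py, so issued by hand the same sequence would look roughly like this; arguments are copied from the trace, while the rpc.py path and the wrapper equivalence are assumptions:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK"/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$SPDK"/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
    "$SPDK"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420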
00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 Malloc0 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 [2024-07-15 23:39:41.521072] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:52.711 test case1: single bdev can't be used in multiple subsystems 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 [2024-07-15 23:39:41.544847] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:52.711 [2024-07-15 
23:39:41.544865] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:52.711 [2024-07-15 23:39:41.544872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:52.711 request: 00:13:52.711 { 00:13:52.711 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:52.711 "namespace": { 00:13:52.711 "bdev_name": "Malloc0", 00:13:52.711 "no_auto_visible": false 00:13:52.711 }, 00:13:52.711 "method": "nvmf_subsystem_add_ns", 00:13:52.711 "req_id": 1 00:13:52.711 } 00:13:52.711 Got JSON-RPC error response 00:13:52.711 response: 00:13:52.711 { 00:13:52.711 "code": -32602, 00:13:52.711 "message": "Invalid parameters" 00:13:52.711 } 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:52.711 Adding namespace failed - expected result. 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:52.711 test case2: host connect to nvmf target in multiple paths 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@553 -- # xtrace_disable 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:52.711 [2024-07-15 23:39:41.556905] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:13:52.711 23:39:41 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:13:52.712 23:39:41 nvmf_rdma.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:53.649 23:39:42 nvmf_rdma.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:13:54.584 23:39:43 nvmf_rdma.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:54.584 23:39:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1192 -- # local i=0 00:13:54.584 23:39:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:13:54.584 23:39:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1194 -- # [[ -n '' ]] 00:13:54.584 23:39:43 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1199 -- # sleep 2 00:13:57.116 23:39:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:13:57.116 23:39:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:13:57.116 23:39:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.116 23:39:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1201 -- # nvme_devices=1 00:13:57.116 23:39:45 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.116 23:39:45 
nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1202 -- # return 0 00:13:57.116 23:39:45 nvmf_rdma.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:57.116 [global] 00:13:57.116 thread=1 00:13:57.116 invalidate=1 00:13:57.116 rw=write 00:13:57.116 time_based=1 00:13:57.116 runtime=1 00:13:57.116 ioengine=libaio 00:13:57.116 direct=1 00:13:57.116 bs=4096 00:13:57.116 iodepth=1 00:13:57.116 norandommap=0 00:13:57.116 numjobs=1 00:13:57.116 00:13:57.116 verify_dump=1 00:13:57.116 verify_backlog=512 00:13:57.116 verify_state_save=0 00:13:57.116 do_verify=1 00:13:57.116 verify=crc32c-intel 00:13:57.116 [job0] 00:13:57.116 filename=/dev/nvme0n1 00:13:57.116 Could not set queue depth (nvme0n1) 00:13:57.116 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:57.116 fio-3.35 00:13:57.116 Starting 1 thread 00:13:58.050 00:13:58.050 job0: (groupid=0, jobs=1): err= 0: pid=1430939: Mon Jul 15 23:39:46 2024 00:13:58.050 read: IOPS=7521, BW=29.4MiB/s (30.8MB/s)(29.4MiB/1001msec) 00:13:58.050 slat (nsec): min=6245, max=22947, avg=6931.40, stdev=701.26 00:13:58.050 clat (nsec): min=48732, max=78550, avg=57224.25, stdev=3666.70 00:13:58.050 lat (nsec): min=55456, max=85381, avg=64155.64, stdev=3729.20 00:13:58.050 clat percentiles (nsec): 00:13:58.050 | 1.00th=[50432], 5.00th=[51456], 10.00th=[52480], 20.00th=[54016], 00:13:58.050 | 30.00th=[55040], 40.00th=[56064], 50.00th=[57088], 60.00th=[58112], 00:13:58.050 | 70.00th=[59136], 80.00th=[60160], 90.00th=[62208], 95.00th=[63232], 00:13:58.050 | 99.00th=[66048], 99.50th=[68096], 99.90th=[73216], 99.95th=[73216], 00:13:58.050 | 99.99th=[78336] 00:13:58.050 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:13:58.050 slat (nsec): min=7314, max=38376, avg=8806.97, stdev=917.97 00:13:58.050 clat (nsec): min=41027, max=82402, avg=54835.31, stdev=3803.79 00:13:58.050 lat (usec): min=54, max=113, avg=63.64, stdev= 3.97 00:13:58.050 clat percentiles (nsec): 00:13:58.051 | 1.00th=[47872], 5.00th=[49408], 10.00th=[49920], 20.00th=[51456], 00:13:58.051 | 30.00th=[52480], 40.00th=[53504], 50.00th=[54528], 60.00th=[55552], 00:13:58.051 | 70.00th=[56576], 80.00th=[58112], 90.00th=[59648], 95.00th=[61184], 00:13:58.051 | 99.00th=[64768], 99.50th=[66048], 99.90th=[70144], 99.95th=[75264], 00:13:58.051 | 99.99th=[82432] 00:13:58.051 bw ( KiB/s): min=32320, max=32320, per=100.00%, avg=32320.00, stdev= 0.00, samples=1 00:13:58.051 iops : min= 8080, max= 8080, avg=8080.00, stdev= 0.00, samples=1 00:13:58.051 lat (usec) : 50=4.77%, 100=95.23% 00:13:58.051 cpu : usr=7.60%, sys=16.50%, ctx=15209, majf=0, minf=2 00:13:58.051 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:58.051 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.051 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:58.051 issued rwts: total=7529,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:58.051 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:58.051 00:13:58.051 Run status group 0 (all jobs): 00:13:58.051 READ: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=29.4MiB (30.8MB), run=1001-1001msec 00:13:58.051 WRITE: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:13:58.051 00:13:58.051 Disk stats (read/write): 00:13:58.051 nvme0n1: ios=6705/7016, 
merge=0/0, ticks=355/327, in_queue=682, util=90.68% 00:13:58.051 23:39:46 nvmf_rdma.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:59.949 23:39:48 nvmf_rdma.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:59.949 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1213 -- # local i=0 00:13:59.949 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:13:59.949 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:59.949 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:13:59.949 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1225 -- # return 0 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:00.207 rmmod nvme_rdma 00:14:00.207 rmmod nvme_fabrics 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:14:00.207 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:14:00.208 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1429981 ']' 00:14:00.208 23:39:48 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1429981 00:14:00.208 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@942 -- # '[' -z 1429981 ']' 00:14:00.208 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@946 -- # kill -0 1429981 00:14:00.208 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@947 -- # uname 00:14:00.208 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:00.208 23:39:48 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1429981 00:14:00.208 23:39:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:14:00.208 23:39:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:14:00.208 23:39:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1429981' 00:14:00.208 killing process with pid 1429981 00:14:00.208 23:39:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@961 -- # kill 1429981 00:14:00.208 23:39:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@966 -- # wait 1429981 00:14:00.466 23:39:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.466 23:39:49 nvmf_rdma.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:00.466 00:14:00.466 real 0m13.928s 00:14:00.466 user 0m41.553s 00:14:00.466 sys 0m4.464s 
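Test case1 above exercises a simple invariant: a bdev already claimed as a namespace of one subsystem cannot be added to a second one. A hand-run equivalent of that RPC sequence against an already started nvmf_tgt, assuming it is executed from the root of an SPDK checkout and reusing the listener address from this run, would look roughly like:

set -e
rpc=./scripts/rpc.py          # path inside the SPDK checkout
addr=192.168.100.8            # NVMF_FIRST_TARGET_IP from this run

$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a "$addr" -s 4420

# A second subsystem can be created and given its own listener...
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a "$addr" -s 4420

# ...but adding the same bdev as a namespace again is expected to fail,
# which is the "Invalid parameters" JSON-RPC error shown in the log above.
if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
    echo "unexpected: Malloc0 was added to two subsystems"
else
    echo "Adding namespace failed - expected result."
fi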
00:14:00.466 23:39:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:00.466 23:39:49 nvmf_rdma.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:00.466 ************************************ 00:14:00.466 END TEST nvmf_nmic 00:14:00.466 ************************************ 00:14:00.466 23:39:49 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:14:00.466 23:39:49 nvmf_rdma -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:00.466 23:39:49 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:00.466 23:39:49 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:00.466 23:39:49 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:00.466 ************************************ 00:14:00.466 START TEST nvmf_fio_target 00:14:00.466 ************************************ 00:14:00.466 23:39:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:14:00.726 * Looking for test storage... 00:14:00.726 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh 
]] 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 
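nvmf/common.sh, sourced above, also supplies the host-side identity: NVME_HOSTNQN comes from `nvme gen-hostnqn`, and in this log its uuid portion is the same value used as NVME_HOSTID in the earlier `nvme connect` lines. A hedged sketch of the multipath connect/disconnect pattern both tests use, with the subsystem NQN, serial and address taken from this run and the kernel nvme-rdma/nvme-fabrics modules assumed loaded:

hostnqn=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:803833e2-...
hostid=${hostnqn##*uuid:}          # uuid part of the hostnqn, matching --hostid in this log

# Connect the same subsystem through both listeners (ports 4420 and 4421).
nvme connect -i 15 --hostnqn="$hostnqn" --hostid="$hostid" \
    -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
nvme connect -i 15 --hostnqn="$hostnqn" --hostid="$hostid" \
    -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421

# Wait until the namespaces show up with the SPDK serial (cf. waitforserial above).
until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done

# Tear down both paths once I/O is finished (cf. target/nmic.sh@48).
nvme disconnect -n nqn.2016-06.io.spdk:cnode1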
00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.726 23:39:49 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.997 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:05.997 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.997 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.997 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.997 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@310 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:05.998 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:05.998 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:05.998 Found net devices under 0000:da:00.0: mlx_0_0 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:05.998 Found net devices under 0000:da:00.1: mlx_0_1 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh 
rxe-net 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:05.998 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:05.998 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:05.998 altname enp218s0f0np0 00:14:05.998 altname ens818f0np0 00:14:05.998 inet 192.168.100.8/24 scope global mlx_0_0 00:14:05.998 valid_lft forever preferred_lft forever 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:05.998 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:05.998 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:05.998 altname enp218s0f1np1 
00:14:05.998 altname ens818f1np1 00:14:05.998 inet 192.168.100.9/24 scope global mlx_0_1 00:14:05.998 valid_lft forever preferred_lft forever 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:05.998 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:05.999 192.168.100.9' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:05.999 192.168.100.9' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:05.999 192.168.100.9' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1434678 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1434678 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@823 -- # '[' -z 1434678 ']' 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:05.999 23:39:54 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.999 [2024-07-15 23:39:54.786818] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:14:05.999 [2024-07-15 23:39:54.786870] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.999 [2024-07-15 23:39:54.843220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.999 [2024-07-15 23:39:54.926503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.999 [2024-07-15 23:39:54.926543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.999 [2024-07-15 23:39:54.926551] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.999 [2024-07-15 23:39:54.926557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.999 [2024-07-15 23:39:54.926562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.999 [2024-07-15 23:39:54.926609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.999 [2024-07-15 23:39:54.926708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.999 [2024-07-15 23:39:54.926794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.999 [2024-07-15 23:39:54.926795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.934 23:39:55 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:06.934 23:39:55 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@856 -- # return 0 00:14:06.934 23:39:55 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:06.934 23:39:55 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:06.934 23:39:55 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.934 23:39:55 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.935 23:39:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:06.935 [2024-07-15 23:39:55.808774] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f68cc0/0x1f6d1b0) succeed. 00:14:06.935 [2024-07-15 23:39:55.817912] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f6a300/0x1fae840) succeed. 
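The rpc.py calls traced next build the fio target layout: several 64 MB malloc bdevs with 512-byte blocks, a raid0 bdev striped over two of them, a concat bdev over three more, and a single subsystem exposing Malloc0, Malloc1, raid0 and concat0 behind an RDMA listener. Condensed into a standalone sketch (rpc.py path shortened, listener address taken from this run):

rpc=./scripts/rpc.py   # run from the SPDK checkout root
addr=192.168.100.8     # NVMF_FIRST_TARGET_IP in this run

# bdev_malloc_create prints the generated bdev name (Malloc0, Malloc1, ...).
created=""
for _ in $(seq 7); do
    created+="$($rpc bdev_malloc_create 64 512) "
done
echo "malloc bdevs: $created"

# Two raid-style bdevs on top of the later malloc bdevs.
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

# One subsystem with four namespaces and an RDMA listener on port 4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a "$addr" -s 4420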
00:14:07.193 23:39:55 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.193 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:07.193 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.451 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:07.451 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.710 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:07.710 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.969 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:07.969 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:07.969 23:39:56 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.226 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:08.226 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.484 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:08.484 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.742 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:08.742 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:14:08.742 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:09.001 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:09.001 23:39:57 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:09.259 23:39:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:09.259 23:39:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:09.518 23:39:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:09.518 [2024-07-15 23:39:58.409128] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:09.518 23:39:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 raid0 00:14:09.776 23:39:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:10.034 23:39:58 nvmf_rdma.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:10.965 23:39:59 nvmf_rdma.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:10.965 23:39:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1192 -- # local i=0 00:14:10.965 23:39:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1193 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.965 23:39:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1194 -- # [[ -n 4 ]] 00:14:10.965 23:39:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1195 -- # nvme_device_counter=4 00:14:10.965 23:39:59 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1199 -- # sleep 2 00:14:12.913 23:40:01 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1200 -- # (( i++ <= 15 )) 00:14:12.913 23:40:01 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.913 23:40:01 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # lsblk -l -o NAME,SERIAL 00:14:12.913 23:40:01 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_devices=4 00:14:12.913 23:40:01 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.913 23:40:01 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1202 -- # return 0 00:14:12.913 23:40:01 nvmf_rdma.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:12.913 [global] 00:14:12.913 thread=1 00:14:12.913 invalidate=1 00:14:12.913 rw=write 00:14:12.913 time_based=1 00:14:12.913 runtime=1 00:14:12.913 ioengine=libaio 00:14:12.913 direct=1 00:14:12.913 bs=4096 00:14:12.913 iodepth=1 00:14:12.913 norandommap=0 00:14:12.913 numjobs=1 00:14:12.913 00:14:12.913 verify_dump=1 00:14:12.913 verify_backlog=512 00:14:12.913 verify_state_save=0 00:14:12.913 do_verify=1 00:14:12.913 verify=crc32c-intel 00:14:12.913 [job0] 00:14:12.913 filename=/dev/nvme0n1 00:14:12.913 [job1] 00:14:12.913 filename=/dev/nvme0n2 00:14:12.913 [job2] 00:14:12.913 filename=/dev/nvme0n3 00:14:12.913 [job3] 00:14:12.913 filename=/dev/nvme0n4 00:14:13.171 Could not set queue depth (nvme0n1) 00:14:13.171 Could not set queue depth (nvme0n2) 00:14:13.171 Could not set queue depth (nvme0n3) 00:14:13.171 Could not set queue depth (nvme0n4) 00:14:13.429 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.429 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.429 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.429 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:13.429 fio-3.35 00:14:13.429 Starting 4 threads 00:14:14.463 00:14:14.463 job0: (groupid=0, jobs=1): err= 0: pid=1436039: Mon Jul 15 23:40:03 2024 00:14:14.463 read: IOPS=3826, BW=14.9MiB/s (15.7MB/s)(15.0MiB/1001msec) 00:14:14.463 slat 
(nsec): min=6237, max=30441, avg=7195.20, stdev=995.97 00:14:14.463 clat (usec): min=68, max=330, avg=120.03, stdev=20.03 00:14:14.463 lat (usec): min=81, max=337, avg=127.22, stdev=20.01 00:14:14.463 clat percentiles (usec): 00:14:14.463 | 1.00th=[ 86], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 110], 00:14:14.463 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 119], 60.00th=[ 121], 00:14:14.463 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 135], 95.00th=[ 149], 00:14:14.463 | 99.00th=[ 215], 99.50th=[ 243], 99.90th=[ 314], 99.95th=[ 330], 00:14:14.463 | 99.99th=[ 330] 00:14:14.463 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:14:14.463 slat (nsec): min=8218, max=37735, avg=9392.20, stdev=1095.64 00:14:14.463 clat (usec): min=65, max=311, avg=111.87, stdev=19.40 00:14:14.463 lat (usec): min=74, max=320, avg=121.27, stdev=19.37 00:14:14.463 clat percentiles (usec): 00:14:14.463 | 1.00th=[ 75], 5.00th=[ 88], 10.00th=[ 94], 20.00th=[ 101], 00:14:14.463 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 111], 60.00th=[ 114], 00:14:14.463 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 145], 00:14:14.463 | 99.00th=[ 194], 99.50th=[ 223], 99.90th=[ 251], 99.95th=[ 285], 00:14:14.463 | 99.99th=[ 314] 00:14:14.463 bw ( KiB/s): min=16384, max=16384, per=22.08%, avg=16384.00, stdev= 0.00, samples=1 00:14:14.463 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:14.463 lat (usec) : 100=12.89%, 250=86.85%, 500=0.25% 00:14:14.463 cpu : usr=3.40%, sys=6.60%, ctx=7926, majf=0, minf=1 00:14:14.463 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.463 issued rwts: total=3830,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.463 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.463 job1: (groupid=0, jobs=1): err= 0: pid=1436040: Mon Jul 15 23:40:03 2024 00:14:14.463 read: IOPS=4005, BW=15.6MiB/s (16.4MB/s)(15.7MiB/1002msec) 00:14:14.463 slat (nsec): min=6263, max=16014, avg=7066.33, stdev=619.01 00:14:14.463 clat (usec): min=67, max=206, avg=116.96, stdev=13.23 00:14:14.463 lat (usec): min=73, max=213, avg=124.03, stdev=13.21 00:14:14.463 clat percentiles (usec): 00:14:14.463 | 1.00th=[ 81], 5.00th=[ 95], 10.00th=[ 102], 20.00th=[ 109], 00:14:14.463 | 30.00th=[ 113], 40.00th=[ 116], 50.00th=[ 118], 60.00th=[ 121], 00:14:14.463 | 70.00th=[ 123], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 137], 00:14:14.463 | 99.00th=[ 159], 99.50th=[ 163], 99.90th=[ 174], 99.95th=[ 178], 00:14:14.463 | 99.99th=[ 206] 00:14:14.463 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:14:14.463 slat (nsec): min=8019, max=38092, avg=9043.89, stdev=1100.71 00:14:14.463 clat (usec): min=61, max=166, avg=109.44, stdev=14.23 00:14:14.463 lat (usec): min=69, max=175, avg=118.49, stdev=14.23 00:14:14.463 clat percentiles (usec): 00:14:14.463 | 1.00th=[ 75], 5.00th=[ 84], 10.00th=[ 93], 20.00th=[ 99], 00:14:14.463 | 30.00th=[ 104], 40.00th=[ 108], 50.00th=[ 111], 60.00th=[ 113], 00:14:14.463 | 70.00th=[ 116], 80.00th=[ 119], 90.00th=[ 125], 95.00th=[ 135], 00:14:14.463 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 161], 99.95th=[ 163], 00:14:14.463 | 99.99th=[ 167] 00:14:14.463 bw ( KiB/s): min=16384, max=16384, per=22.08%, avg=16384.00, stdev= 0.00, samples=2 00:14:14.463 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:14:14.463 
lat (usec) : 100=14.71%, 250=85.29% 00:14:14.463 cpu : usr=5.09%, sys=8.39%, ctx=8112, majf=0, minf=1 00:14:14.463 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.463 issued rwts: total=4014,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.463 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.463 job2: (groupid=0, jobs=1): err= 0: pid=1436041: Mon Jul 15 23:40:03 2024 00:14:14.463 read: IOPS=4659, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1001msec) 00:14:14.463 slat (nsec): min=6481, max=19065, avg=7271.05, stdev=707.58 00:14:14.463 clat (usec): min=74, max=296, avg=94.56, stdev= 7.55 00:14:14.463 lat (usec): min=81, max=303, avg=101.83, stdev= 7.58 00:14:14.463 clat percentiles (usec): 00:14:14.463 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 89], 00:14:14.463 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 96], 00:14:14.463 | 70.00th=[ 98], 80.00th=[ 100], 90.00th=[ 104], 95.00th=[ 108], 00:14:14.463 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 123], 99.95th=[ 127], 00:14:14.463 | 99.99th=[ 297] 00:14:14.463 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:14:14.463 slat (nsec): min=8140, max=40118, avg=9109.43, stdev=884.17 00:14:14.463 clat (usec): min=72, max=263, avg=89.89, stdev= 7.47 00:14:14.463 lat (usec): min=81, max=280, avg=99.00, stdev= 7.62 00:14:14.463 clat percentiles (usec): 00:14:14.463 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:14:14.463 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 89], 60.00th=[ 91], 00:14:14.463 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 99], 95.00th=[ 103], 00:14:14.463 | 99.00th=[ 111], 99.50th=[ 114], 99.90th=[ 119], 99.95th=[ 133], 00:14:14.464 | 99.99th=[ 265] 00:14:14.464 bw ( KiB/s): min=20480, max=20480, per=27.59%, avg=20480.00, stdev= 0.00, samples=1 00:14:14.464 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:14.464 lat (usec) : 100=86.44%, 250=13.54%, 500=0.02% 00:14:14.464 cpu : usr=6.10%, sys=10.00%, ctx=9784, majf=0, minf=2 00:14:14.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.464 issued rwts: total=4664,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.464 job3: (groupid=0, jobs=1): err= 0: pid=1436042: Mon Jul 15 23:40:03 2024 00:14:14.464 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:14:14.464 slat (nsec): min=6468, max=20178, avg=7203.58, stdev=671.85 00:14:14.464 clat (usec): min=72, max=196, avg=88.61, stdev= 6.47 00:14:14.464 lat (usec): min=79, max=203, avg=95.81, stdev= 6.51 00:14:14.464 clat percentiles (usec): 00:14:14.464 | 1.00th=[ 78], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:14:14.464 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 90], 00:14:14.464 | 70.00th=[ 92], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 100], 00:14:14.464 | 99.00th=[ 109], 99.50th=[ 112], 99.90th=[ 121], 99.95th=[ 122], 00:14:14.464 | 99.99th=[ 198] 00:14:14.464 write: IOPS=5274, BW=20.6MiB/s (21.6MB/s)(20.6MiB/1001msec); 0 zone resets 00:14:14.464 slat (nsec): min=8116, max=38321, avg=9161.47, stdev=972.84 00:14:14.464 clat (usec): min=68, max=122, 
avg=83.54, stdev= 5.94 00:14:14.464 lat (usec): min=77, max=148, avg=92.70, stdev= 6.04 00:14:14.464 clat percentiles (usec): 00:14:14.464 | 1.00th=[ 73], 5.00th=[ 76], 10.00th=[ 77], 20.00th=[ 79], 00:14:14.464 | 30.00th=[ 81], 40.00th=[ 82], 50.00th=[ 83], 60.00th=[ 85], 00:14:14.464 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 92], 95.00th=[ 94], 00:14:14.464 | 99.00th=[ 100], 99.50th=[ 103], 99.90th=[ 112], 99.95th=[ 117], 00:14:14.464 | 99.99th=[ 124] 00:14:14.464 bw ( KiB/s): min=20904, max=20904, per=28.17%, avg=20904.00, stdev= 0.00, samples=1 00:14:14.464 iops : min= 5226, max= 5226, avg=5226.00, stdev= 0.00, samples=1 00:14:14.464 lat (usec) : 100=97.15%, 250=2.85% 00:14:14.464 cpu : usr=6.70%, sys=10.40%, ctx=10400, majf=0, minf=1 00:14:14.464 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:14.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:14.464 issued rwts: total=5120,5280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:14.464 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:14.464 00:14:14.464 Run status group 0 (all jobs): 00:14:14.464 READ: bw=68.7MiB/s (72.1MB/s), 14.9MiB/s-20.0MiB/s (15.7MB/s-20.9MB/s), io=68.9MiB (72.2MB), run=1001-1002msec 00:14:14.464 WRITE: bw=72.5MiB/s (76.0MB/s), 16.0MiB/s-20.6MiB/s (16.7MB/s-21.6MB/s), io=72.6MiB (76.2MB), run=1001-1002msec 00:14:14.464 00:14:14.464 Disk stats (read/write): 00:14:14.464 nvme0n1: ios=3121/3542, merge=0/0, ticks=366/383, in_queue=749, util=84.07% 00:14:14.464 nvme0n2: ios=3183/3584, merge=0/0, ticks=349/365, in_queue=714, util=85.02% 00:14:14.464 nvme0n3: ios=4042/4096, merge=0/0, ticks=337/353, in_queue=690, util=88.25% 00:14:14.464 nvme0n4: ios=4096/4588, merge=0/0, ticks=330/333, in_queue=663, util=89.39% 00:14:14.464 23:40:03 nvmf_rdma.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:14.464 [global] 00:14:14.464 thread=1 00:14:14.464 invalidate=1 00:14:14.464 rw=randwrite 00:14:14.464 time_based=1 00:14:14.464 runtime=1 00:14:14.464 ioengine=libaio 00:14:14.464 direct=1 00:14:14.464 bs=4096 00:14:14.464 iodepth=1 00:14:14.464 norandommap=0 00:14:14.464 numjobs=1 00:14:14.464 00:14:14.464 verify_dump=1 00:14:14.464 verify_backlog=512 00:14:14.464 verify_state_save=0 00:14:14.464 do_verify=1 00:14:14.464 verify=crc32c-intel 00:14:14.464 [job0] 00:14:14.464 filename=/dev/nvme0n1 00:14:14.464 [job1] 00:14:14.464 filename=/dev/nvme0n2 00:14:14.464 [job2] 00:14:14.464 filename=/dev/nvme0n3 00:14:14.464 [job3] 00:14:14.464 filename=/dev/nvme0n4 00:14:14.726 Could not set queue depth (nvme0n1) 00:14:14.726 Could not set queue depth (nvme0n2) 00:14:14.726 Could not set queue depth (nvme0n3) 00:14:14.726 Could not set queue depth (nvme0n4) 00:14:14.983 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:14.983 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:14.983 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:14.983 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:14.983 fio-3.35 00:14:14.983 Starting 4 threads 00:14:16.358 00:14:16.358 job0: (groupid=0, jobs=1): err= 0: pid=1436414: Mon Jul 15 23:40:04 2024 
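The per-job bandwidth figures in these fio summaries are consistent with IOPS times the 4096-byte block size; for example, the "write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)" line earlier in this output is 4091 x 4096 bytes per second. A quick sanity check in shell (a sketch; the variable names and the awk expression are illustrative and not part of the test scripts):

# Verify that reported bandwidth matches IOPS * block size for a 4 KiB workload.
iops=4091; bs=4096
awk -v iops="$iops" -v bs="$bs" 'BEGIN {
    bytes = iops * bs
    printf "%.1f MiB/s (%.1f MB/s)\n", bytes / (1024 * 1024), bytes / 1e6
}'
# Prints: 16.0 MiB/s (16.8 MB/s), matching the fio summary above.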
00:14:16.358 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:14:16.358 slat (nsec): min=3849, max=15411, avg=5335.36, stdev=964.48 00:14:16.358 clat (usec): min=65, max=279, avg=89.95, stdev=19.65 00:14:16.358 lat (usec): min=69, max=286, avg=95.28, stdev=20.38 00:14:16.358 clat percentiles (usec): 00:14:16.358 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:14:16.358 | 30.00th=[ 80], 40.00th=[ 82], 50.00th=[ 84], 60.00th=[ 86], 00:14:16.358 | 70.00th=[ 89], 80.00th=[ 95], 90.00th=[ 125], 95.00th=[ 130], 00:14:16.358 | 99.00th=[ 143], 99.50th=[ 174], 99.90th=[ 239], 99.95th=[ 265], 00:14:16.358 | 99.99th=[ 281] 00:14:16.358 write: IOPS=5507, BW=21.5MiB/s (22.6MB/s)(21.5MiB/1001msec); 0 zone resets 00:14:16.358 slat (nsec): min=4174, max=35102, avg=6023.85, stdev=1641.12 00:14:16.358 clat (usec): min=55, max=236, avg=84.26, stdev=17.89 00:14:16.358 lat (usec): min=62, max=244, avg=90.29, stdev=19.18 00:14:16.358 clat percentiles (usec): 00:14:16.358 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:14:16.358 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 79], 60.00th=[ 81], 00:14:16.358 | 70.00th=[ 84], 80.00th=[ 91], 90.00th=[ 115], 95.00th=[ 119], 00:14:16.358 | 99.00th=[ 149], 99.50th=[ 155], 99.90th=[ 217], 99.95th=[ 233], 00:14:16.358 | 99.99th=[ 237] 00:14:16.358 bw ( KiB/s): min=20480, max=20480, per=28.14%, avg=20480.00, stdev= 0.00, samples=1 00:14:16.358 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:14:16.358 lat (usec) : 100=82.43%, 250=17.54%, 500=0.03% 00:14:16.358 cpu : usr=2.00%, sys=6.80%, ctx=10633, majf=0, minf=1 00:14:16.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:16.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.358 issued rwts: total=5120,5513,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:16.358 job1: (groupid=0, jobs=1): err= 0: pid=1436415: Mon Jul 15 23:40:04 2024 00:14:16.358 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:14:16.358 slat (nsec): min=6091, max=27271, avg=7256.60, stdev=830.42 00:14:16.358 clat (usec): min=67, max=363, avg=126.77, stdev=14.06 00:14:16.358 lat (usec): min=73, max=370, avg=134.03, stdev=14.03 00:14:16.358 clat percentiles (usec): 00:14:16.358 | 1.00th=[ 85], 5.00th=[ 109], 10.00th=[ 116], 20.00th=[ 120], 00:14:16.358 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:14:16.358 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 145], 00:14:16.358 | 99.00th=[ 174], 99.50th=[ 180], 99.90th=[ 190], 99.95th=[ 196], 00:14:16.358 | 99.99th=[ 363] 00:14:16.358 write: IOPS=3991, BW=15.6MiB/s (16.3MB/s)(15.6MiB/1001msec); 0 zone resets 00:14:16.358 slat (nsec): min=7698, max=37714, avg=9025.52, stdev=999.54 00:14:16.358 clat (usec): min=63, max=245, avg=116.91, stdev=13.50 00:14:16.358 lat (usec): min=72, max=254, avg=125.94, stdev=13.51 00:14:16.358 clat percentiles (usec): 00:14:16.358 | 1.00th=[ 78], 5.00th=[ 97], 10.00th=[ 105], 20.00th=[ 111], 00:14:16.358 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:14:16.358 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 130], 95.00th=[ 137], 00:14:16.358 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 180], 00:14:16.358 | 99.99th=[ 245] 00:14:16.358 bw ( KiB/s): min=16384, max=16384, per=22.51%, avg=16384.00, stdev= 0.00, samples=1 00:14:16.358 iops 
: min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:16.358 lat (usec) : 100=4.88%, 250=95.10%, 500=0.01% 00:14:16.358 cpu : usr=5.30%, sys=7.80%, ctx=7579, majf=0, minf=1 00:14:16.358 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:16.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.358 issued rwts: total=3584,3995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.358 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:16.358 job2: (groupid=0, jobs=1): err= 0: pid=1436418: Mon Jul 15 23:40:04 2024 00:14:16.358 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:14:16.358 slat (nsec): min=6486, max=19550, avg=7214.49, stdev=672.04 00:14:16.358 clat (usec): min=78, max=279, avg=127.05, stdev=10.85 00:14:16.358 lat (usec): min=85, max=291, avg=134.26, stdev=10.86 00:14:16.358 clat percentiles (usec): 00:14:16.358 | 1.00th=[ 93], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 121], 00:14:16.358 | 30.00th=[ 123], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:14:16.358 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:14:16.358 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 178], 99.95th=[ 180], 00:14:16.358 | 99.99th=[ 281] 00:14:16.358 write: IOPS=3992, BW=15.6MiB/s (16.4MB/s)(15.6MiB/1001msec); 0 zone resets 00:14:16.358 slat (nsec): min=8109, max=37698, avg=9228.98, stdev=1006.48 00:14:16.358 clat (usec): min=69, max=167, avg=116.81, stdev=10.14 00:14:16.358 lat (usec): min=79, max=176, avg=126.04, stdev=10.18 00:14:16.358 clat percentiles (usec): 00:14:16.358 | 1.00th=[ 87], 5.00th=[ 103], 10.00th=[ 108], 20.00th=[ 111], 00:14:16.358 | 30.00th=[ 113], 40.00th=[ 115], 50.00th=[ 117], 60.00th=[ 119], 00:14:16.358 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 128], 95.00th=[ 133], 00:14:16.358 | 99.00th=[ 149], 99.50th=[ 157], 99.90th=[ 165], 99.95th=[ 167], 00:14:16.358 | 99.99th=[ 167] 00:14:16.358 bw ( KiB/s): min=16384, max=16384, per=22.51%, avg=16384.00, stdev= 0.00, samples=1 00:14:16.358 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:16.359 lat (usec) : 100=2.82%, 250=97.16%, 500=0.01% 00:14:16.359 cpu : usr=2.80%, sys=7.30%, ctx=7580, majf=0, minf=2 00:14:16.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:16.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.359 issued rwts: total=3584,3996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:16.359 job3: (groupid=0, jobs=1): err= 0: pid=1436419: Mon Jul 15 23:40:04 2024 00:14:16.359 read: IOPS=4608, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1000msec) 00:14:16.359 slat (nsec): min=6144, max=20213, avg=6918.10, stdev=632.94 00:14:16.359 clat (usec): min=67, max=172, avg=99.24, stdev=14.75 00:14:16.359 lat (usec): min=74, max=179, avg=106.16, stdev=14.96 00:14:16.359 clat percentiles (usec): 00:14:16.359 | 1.00th=[ 82], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:14:16.359 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 97], 00:14:16.359 | 70.00th=[ 100], 80.00th=[ 111], 90.00th=[ 126], 95.00th=[ 130], 00:14:16.359 | 99.00th=[ 139], 99.50th=[ 145], 99.90th=[ 159], 99.95th=[ 163], 00:14:16.359 | 99.99th=[ 174] 00:14:16.359 write: IOPS=4712, BW=18.4MiB/s (19.3MB/s)(18.4MiB/1000msec); 0 zone resets 
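The job file echoed by the fio-wrapper above can be reproduced outside the harness. A minimal standalone sketch, assuming fio is installed and the four SPDK namespaces are attached as /dev/nvme0n1 through /dev/nvme0n4 as in this log (the file name nvmf-randwrite-qd1.fio is illustrative, and the wrapper may pass options beyond what the echo shows):

# Rebuild the 4 KiB, queue-depth-1 random-write job printed above and run it.
cat > nvmf-randwrite-qd1.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
fio nvmf-randwrite-qd1.fio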
00:14:16.359 slat (nsec): min=8154, max=70125, avg=9307.78, stdev=1490.51 00:14:16.359 clat (usec): min=68, max=154, avg=93.70, stdev=13.26 00:14:16.359 lat (usec): min=80, max=195, avg=103.01, stdev=13.31 00:14:16.359 clat percentiles (usec): 00:14:16.359 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:14:16.359 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 92], 00:14:16.359 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 120], 00:14:16.359 | 99.00th=[ 133], 99.50th=[ 141], 99.90th=[ 151], 99.95th=[ 151], 00:14:16.359 | 99.99th=[ 155] 00:14:16.359 bw ( KiB/s): min=18760, max=18760, per=25.77%, avg=18760.00, stdev= 0.00, samples=1 00:14:16.359 iops : min= 4690, max= 4690, avg=4690.00, stdev= 0.00, samples=1 00:14:16.359 lat (usec) : 100=72.94%, 250=27.06% 00:14:16.359 cpu : usr=6.60%, sys=7.30%, ctx=9321, majf=0, minf=1 00:14:16.359 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:16.359 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.359 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:16.359 issued rwts: total=4608,4712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:16.359 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:16.359 00:14:16.359 Run status group 0 (all jobs): 00:14:16.359 READ: bw=65.9MiB/s (69.1MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1000-1001msec 00:14:16.359 WRITE: bw=71.1MiB/s (74.5MB/s), 15.6MiB/s-21.5MiB/s (16.3MB/s-22.6MB/s), io=71.2MiB (74.6MB), run=1000-1001msec 00:14:16.359 00:14:16.359 Disk stats (read/write): 00:14:16.359 nvme0n1: ios=4178/4608, merge=0/0, ticks=386/381, in_queue=767, util=84.37% 00:14:16.359 nvme0n2: ios=3072/3252, merge=0/0, ticks=361/343, in_queue=704, util=85.23% 00:14:16.359 nvme0n3: ios=3072/3253, merge=0/0, ticks=374/365, in_queue=739, util=88.38% 00:14:16.359 nvme0n4: ios=3611/4096, merge=0/0, ticks=342/336, in_queue=678, util=89.52% 00:14:16.359 23:40:04 nvmf_rdma.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:14:16.359 [global] 00:14:16.359 thread=1 00:14:16.359 invalidate=1 00:14:16.359 rw=write 00:14:16.359 time_based=1 00:14:16.359 runtime=1 00:14:16.359 ioengine=libaio 00:14:16.359 direct=1 00:14:16.359 bs=4096 00:14:16.359 iodepth=128 00:14:16.359 norandommap=0 00:14:16.359 numjobs=1 00:14:16.359 00:14:16.359 verify_dump=1 00:14:16.359 verify_backlog=512 00:14:16.359 verify_state_save=0 00:14:16.359 do_verify=1 00:14:16.359 verify=crc32c-intel 00:14:16.359 [job0] 00:14:16.359 filename=/dev/nvme0n1 00:14:16.359 [job1] 00:14:16.359 filename=/dev/nvme0n2 00:14:16.359 [job2] 00:14:16.359 filename=/dev/nvme0n3 00:14:16.359 [job3] 00:14:16.359 filename=/dev/nvme0n4 00:14:16.359 Could not set queue depth (nvme0n1) 00:14:16.359 Could not set queue depth (nvme0n2) 00:14:16.359 Could not set queue depth (nvme0n3) 00:14:16.359 Could not set queue depth (nvme0n4) 00:14:16.359 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:16.359 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:16.359 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:16.359 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:16.359 fio-3.35 00:14:16.359 Starting 
4 threads 00:14:17.734 00:14:17.734 job0: (groupid=0, jobs=1): err= 0: pid=1436793: Mon Jul 15 23:40:06 2024 00:14:17.734 read: IOPS=4026, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1004msec) 00:14:17.734 slat (nsec): min=1355, max=2658.5k, avg=123636.21, stdev=379476.48 00:14:17.734 clat (usec): min=2590, max=20637, avg=15930.81, stdev=2186.60 00:14:17.734 lat (usec): min=3003, max=20643, avg=16054.45, stdev=2173.61 00:14:17.734 clat percentiles (usec): 00:14:17.734 | 1.00th=[ 9896], 5.00th=[12256], 10.00th=[12518], 20.00th=[14746], 00:14:17.734 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[17171], 00:14:17.734 | 70.00th=[17433], 80.00th=[17695], 90.00th=[17957], 95.00th=[19006], 00:14:17.734 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20579], 99.95th=[20579], 00:14:17.734 | 99.99th=[20579] 00:14:17.734 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:14:17.735 slat (nsec): min=1993, max=2553.9k, avg=118578.20, stdev=360957.91 00:14:17.735 clat (usec): min=9222, max=18353, avg=15273.52, stdev=1800.02 00:14:17.735 lat (usec): min=9228, max=18360, avg=15392.09, stdev=1787.44 00:14:17.735 clat percentiles (usec): 00:14:17.735 | 1.00th=[10945], 5.00th=[11600], 10.00th=[13435], 20.00th=[14091], 00:14:17.735 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[15664], 00:14:17.735 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17695], 00:14:17.735 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18220], 99.95th=[18220], 00:14:17.735 | 99.99th=[18482] 00:14:17.735 bw ( KiB/s): min=16280, max=16455, per=17.41%, avg=16367.50, stdev=123.74, samples=2 00:14:17.735 iops : min= 4070, max= 4113, avg=4091.50, stdev=30.41, samples=2 00:14:17.735 lat (msec) : 4=0.05%, 10=0.59%, 20=99.14%, 50=0.22% 00:14:17.735 cpu : usr=1.99%, sys=3.99%, ctx=1575, majf=0, minf=1 00:14:17.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:14:17.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:17.735 issued rwts: total=4043,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:17.735 job1: (groupid=0, jobs=1): err= 0: pid=1436794: Mon Jul 15 23:40:06 2024 00:14:17.735 read: IOPS=6739, BW=26.3MiB/s (27.6MB/s)(26.4MiB/1003msec) 00:14:17.735 slat (nsec): min=1249, max=1987.7k, avg=70912.81, stdev=269896.24 00:14:17.735 clat (usec): min=1553, max=17713, avg=9185.16, stdev=4356.99 00:14:17.735 lat (usec): min=2983, max=17721, avg=9256.07, stdev=4386.27 00:14:17.735 clat percentiles (usec): 00:14:17.735 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5997], 20.00th=[ 6521], 00:14:17.735 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6849], 60.00th=[ 6980], 00:14:17.735 | 70.00th=[ 7635], 80.00th=[15926], 90.00th=[17171], 95.00th=[17433], 00:14:17.735 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17695], 99.95th=[17695], 00:14:17.735 | 99.99th=[17695] 00:14:17.735 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:14:17.735 slat (nsec): min=1748, max=1977.1k, avg=69506.22, stdev=264761.86 00:14:17.735 clat (usec): min=4386, max=19135, avg=9055.27, stdev=4670.30 00:14:17.735 lat (usec): min=4675, max=19143, avg=9124.78, stdev=4702.61 00:14:17.735 clat percentiles (usec): 00:14:17.735 | 1.00th=[ 4883], 5.00th=[ 5080], 10.00th=[ 5473], 20.00th=[ 6194], 00:14:17.735 | 30.00th=[ 6325], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6587], 00:14:17.735 | 
70.00th=[ 7242], 80.00th=[16909], 90.00th=[17695], 95.00th=[17695], 00:14:17.735 | 99.00th=[17957], 99.50th=[17957], 99.90th=[19006], 99.95th=[19006], 00:14:17.735 | 99.99th=[19006] 00:14:17.735 bw ( KiB/s): min=16200, max=40960, per=30.41%, avg=28580.00, stdev=17507.96, samples=2 00:14:17.735 iops : min= 4050, max=10240, avg=7145.00, stdev=4376.99, samples=2 00:14:17.735 lat (msec) : 2=0.01%, 4=0.10%, 10=72.13%, 20=27.76% 00:14:17.735 cpu : usr=3.09%, sys=4.79%, ctx=1547, majf=0, minf=1 00:14:17.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:17.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:17.735 issued rwts: total=6760,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:17.735 job2: (groupid=0, jobs=1): err= 0: pid=1436795: Mon Jul 15 23:40:06 2024 00:14:17.735 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:14:17.735 slat (nsec): min=1343, max=4831.7k, avg=89332.79, stdev=335020.31 00:14:17.735 clat (usec): min=6800, max=20082, avg=11734.31, stdev=4332.76 00:14:17.735 lat (usec): min=6802, max=20088, avg=11823.64, stdev=4354.69 00:14:17.735 clat percentiles (usec): 00:14:17.735 | 1.00th=[ 7242], 5.00th=[ 7963], 10.00th=[ 8160], 20.00th=[ 8225], 00:14:17.735 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[11994], 00:14:17.735 | 70.00th=[16450], 80.00th=[17433], 90.00th=[17695], 95.00th=[17957], 00:14:17.735 | 99.00th=[19530], 99.50th=[19530], 99.90th=[20055], 99.95th=[20055], 00:14:17.735 | 99.99th=[20055] 00:14:17.735 write: IOPS=5655, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1003msec); 0 zone resets 00:14:17.735 slat (nsec): min=1906, max=3904.8k, avg=84050.93, stdev=311041.47 00:14:17.735 clat (usec): min=2602, max=18643, avg=10760.61, stdev=4193.41 00:14:17.735 lat (usec): min=3016, max=18648, avg=10844.66, stdev=4217.71 00:14:17.735 clat percentiles (usec): 00:14:17.735 | 1.00th=[ 5932], 5.00th=[ 7177], 10.00th=[ 7635], 20.00th=[ 7832], 00:14:17.735 | 30.00th=[ 7898], 40.00th=[ 7963], 50.00th=[ 8029], 60.00th=[ 8225], 00:14:17.735 | 70.00th=[12780], 80.00th=[16909], 90.00th=[17171], 95.00th=[17433], 00:14:17.735 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:14:17.735 | 99.99th=[18744] 00:14:17.735 bw ( KiB/s): min=16072, max=28984, per=23.97%, avg=22528.00, stdev=9130.16, samples=2 00:14:17.735 iops : min= 4018, max= 7246, avg=5632.00, stdev=2282.54, samples=2 00:14:17.735 lat (msec) : 4=0.24%, 10=60.68%, 20=38.92%, 50=0.17% 00:14:17.735 cpu : usr=3.49%, sys=3.09%, ctx=1119, majf=0, minf=1 00:14:17.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:14:17.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:17.735 issued rwts: total=5632,5672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:17.735 job3: (groupid=0, jobs=1): err= 0: pid=1436796: Mon Jul 15 23:40:06 2024 00:14:17.735 read: IOPS=6484, BW=25.3MiB/s (26.6MB/s)(25.4MiB/1001msec) 00:14:17.735 slat (nsec): min=1355, max=1732.6k, avg=76680.60, stdev=235830.33 00:14:17.735 clat (usec): min=350, max=19679, avg=9881.27, stdev=4396.90 00:14:17.735 lat (usec): min=1497, max=20090, avg=9957.95, stdev=4428.89 00:14:17.735 clat percentiles (usec): 00:14:17.735 | 
1.00th=[ 3621], 5.00th=[ 6128], 10.00th=[ 6325], 20.00th=[ 6521], 00:14:17.735 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6915], 60.00th=[ 7373], 00:14:17.735 | 70.00th=[14353], 80.00th=[15270], 90.00th=[15533], 95.00th=[18220], 00:14:17.735 | 99.00th=[19268], 99.50th=[19530], 99.90th=[19530], 99.95th=[19530], 00:14:17.735 | 99.99th=[19792] 00:14:17.735 write: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec); 0 zone resets 00:14:17.735 slat (nsec): min=1897, max=1825.2k, avg=71574.68, stdev=219910.65 00:14:17.735 clat (usec): min=5335, max=18187, avg=9374.49, stdev=3930.19 00:14:17.735 lat (usec): min=5343, max=18200, avg=9446.06, stdev=3959.96 00:14:17.735 clat percentiles (usec): 00:14:17.735 | 1.00th=[ 5604], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6259], 00:14:17.735 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6980], 00:14:17.735 | 70.00th=[13698], 80.00th=[14484], 90.00th=[14746], 95.00th=[15139], 00:14:17.735 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18220], 99.95th=[18220], 00:14:17.735 | 99.99th=[18220] 00:14:17.735 bw ( KiB/s): min=16384, max=16384, per=17.43%, avg=16384.00, stdev= 0.00, samples=1 00:14:17.735 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:14:17.735 lat (usec) : 500=0.01% 00:14:17.735 lat (msec) : 2=0.24%, 4=0.26%, 10=61.44%, 20=38.05% 00:14:17.735 cpu : usr=3.30%, sys=6.60%, ctx=1313, majf=0, minf=1 00:14:17.735 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:14:17.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:17.735 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:17.735 issued rwts: total=6491,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:17.735 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:17.735 00:14:17.735 Run status group 0 (all jobs): 00:14:17.735 READ: bw=89.2MiB/s (93.5MB/s), 15.7MiB/s-26.3MiB/s (16.5MB/s-27.6MB/s), io=89.6MiB (93.9MB), run=1001-1004msec 00:14:17.735 WRITE: bw=91.8MiB/s (96.2MB/s), 15.9MiB/s-27.9MiB/s (16.7MB/s-29.3MB/s), io=92.2MiB (96.6MB), run=1001-1004msec 00:14:17.735 00:14:17.735 Disk stats (read/write): 00:14:17.735 nvme0n1: ios=3576/3584, merge=0/0, ticks=16021/15713, in_queue=31734, util=87.07% 00:14:17.735 nvme0n2: ios=6248/6656, merge=0/0, ticks=13287/13683, in_queue=26970, util=87.65% 00:14:17.735 nvme0n3: ios=5120/5133, merge=0/0, ticks=20749/19871, in_queue=40620, util=89.26% 00:14:17.735 nvme0n4: ios=5120/5321, merge=0/0, ticks=15724/15626, in_queue=31350, util=89.71% 00:14:17.735 23:40:06 nvmf_rdma.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:14:17.735 [global] 00:14:17.735 thread=1 00:14:17.735 invalidate=1 00:14:17.735 rw=randwrite 00:14:17.735 time_based=1 00:14:17.735 runtime=1 00:14:17.735 ioengine=libaio 00:14:17.735 direct=1 00:14:17.735 bs=4096 00:14:17.735 iodepth=128 00:14:17.735 norandommap=0 00:14:17.735 numjobs=1 00:14:17.735 00:14:17.735 verify_dump=1 00:14:17.735 verify_backlog=512 00:14:17.735 verify_state_save=0 00:14:17.735 do_verify=1 00:14:17.735 verify=crc32c-intel 00:14:17.735 [job0] 00:14:17.735 filename=/dev/nvme0n1 00:14:17.735 [job1] 00:14:17.735 filename=/dev/nvme0n2 00:14:17.735 [job2] 00:14:17.735 filename=/dev/nvme0n3 00:14:17.735 [job3] 00:14:17.735 filename=/dev/nvme0n4 00:14:17.735 Could not set queue depth (nvme0n1) 00:14:17.735 Could not set queue depth (nvme0n2) 00:14:17.735 Could not set queue depth (nvme0n3) 
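The same workload can also be expressed directly on the fio command line. Below is a single-device sketch equivalent to the iodepth=128 random-write job echoed above (the job name and the choice of /dev/nvme0n1 are illustrative). The repeated "Could not set queue depth (nvme0nX)" lines are warnings emitted while fio prepares the devices; the err= 0 results in this log show the runs complete regardless.

# Command-line form of the queue-depth-128 random-write job for one device.
fio --name=job0 --filename=/dev/nvme0n1 \
    --rw=randwrite --bs=4096 --iodepth=128 --numjobs=1 \
    --ioengine=libaio --direct=1 --thread \
    --time_based --runtime=1 \
    --verify=crc32c-intel --do_verify=1 --verify_backlog=512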
00:14:17.735 Could not set queue depth (nvme0n4) 00:14:17.994 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.994 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.994 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.994 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:17.994 fio-3.35 00:14:17.994 Starting 4 threads 00:14:19.388 00:14:19.388 job0: (groupid=0, jobs=1): err= 0: pid=1437174: Mon Jul 15 23:40:08 2024 00:14:19.388 read: IOPS=2310, BW=9242KiB/s (9464kB/s)(9316KiB/1008msec) 00:14:19.388 slat (nsec): min=1438, max=7161.5k, avg=214692.35, stdev=809507.52 00:14:19.388 clat (usec): min=4672, max=33073, avg=27669.74, stdev=3094.15 00:14:19.388 lat (usec): min=8234, max=33349, avg=27884.43, stdev=3007.47 00:14:19.388 clat percentiles (usec): 00:14:19.388 | 1.00th=[10814], 5.00th=[22938], 10.00th=[27132], 20.00th=[27919], 00:14:19.388 | 30.00th=[28181], 40.00th=[28181], 50.00th=[28443], 60.00th=[28443], 00:14:19.388 | 70.00th=[28705], 80.00th=[28967], 90.00th=[28967], 95.00th=[29230], 00:14:19.388 | 99.00th=[31065], 99.50th=[32637], 99.90th=[33162], 99.95th=[33162], 00:14:19.388 | 99.99th=[33162] 00:14:19.388 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:14:19.388 slat (nsec): min=1902, max=5662.5k, avg=189905.23, stdev=701532.24 00:14:19.388 clat (usec): min=415, max=32133, avg=24494.78, stdev=7693.11 00:14:19.388 lat (usec): min=436, max=32136, avg=24684.68, stdev=7725.41 00:14:19.388 clat percentiles (usec): 00:14:19.388 | 1.00th=[ 1369], 5.00th=[ 6849], 10.00th=[10290], 20.00th=[23200], 00:14:19.388 | 30.00th=[27657], 40.00th=[27919], 50.00th=[28181], 60.00th=[28181], 00:14:19.388 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28705], 95.00th=[28967], 00:14:19.388 | 99.00th=[29230], 99.50th=[29230], 99.90th=[29492], 99.95th=[29492], 00:14:19.388 | 99.99th=[32113] 00:14:19.388 bw ( KiB/s): min= 8480, max=12000, per=15.64%, avg=10240.00, stdev=2489.02, samples=2 00:14:19.388 iops : min= 2120, max= 3000, avg=2560.00, stdev=622.25, samples=2 00:14:19.388 lat (usec) : 500=0.06% 00:14:19.388 lat (msec) : 2=0.70%, 4=1.27%, 10=2.97%, 20=7.22%, 50=87.79% 00:14:19.388 cpu : usr=1.69%, sys=2.88%, ctx=1413, majf=0, minf=1 00:14:19.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:14:19.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:19.388 issued rwts: total=2329,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:19.388 job1: (groupid=0, jobs=1): err= 0: pid=1437182: Mon Jul 15 23:40:08 2024 00:14:19.388 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:14:19.388 slat (nsec): min=1430, max=5332.1k, avg=220601.86, stdev=764217.72 00:14:19.388 clat (usec): min=23163, max=32734, avg=28330.84, stdev=817.54 00:14:19.388 lat (usec): min=27375, max=33107, avg=28551.45, stdev=474.52 00:14:19.388 clat percentiles (usec): 00:14:19.388 | 1.00th=[23987], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:14:19.388 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28443], 00:14:19.388 | 70.00th=[28705], 80.00th=[28705], 90.00th=[28967], 95.00th=[28967], 00:14:19.388 
| 99.00th=[29230], 99.50th=[29492], 99.90th=[32375], 99.95th=[32637], 00:14:19.388 | 99.99th=[32637] 00:14:19.388 write: IOPS=2440, BW=9764KiB/s (9998kB/s)(9832KiB/1007msec); 0 zone resets 00:14:19.388 slat (usec): min=2, max=5225, avg=219.53, stdev=735.77 00:14:19.388 clat (usec): min=5018, max=33340, avg=27972.63, stdev=2735.15 00:14:19.388 lat (usec): min=8308, max=34422, avg=28192.16, stdev=2656.95 00:14:19.388 clat percentiles (usec): 00:14:19.388 | 1.00th=[11207], 5.00th=[24773], 10.00th=[27395], 20.00th=[27657], 00:14:19.388 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28181], 60.00th=[28443], 00:14:19.388 | 70.00th=[28705], 80.00th=[28967], 90.00th=[29492], 95.00th=[31065], 00:14:19.388 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33162], 99.95th=[33162], 00:14:19.388 | 99.99th=[33424] 00:14:19.388 bw ( KiB/s): min= 8928, max= 9712, per=14.24%, avg=9320.00, stdev=554.37, samples=2 00:14:19.388 iops : min= 2232, max= 2428, avg=2330.00, stdev=138.59, samples=2 00:14:19.388 lat (msec) : 10=0.24%, 20=1.20%, 50=98.56% 00:14:19.388 cpu : usr=1.69%, sys=2.58%, ctx=1462, majf=0, minf=1 00:14:19.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:14:19.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:19.388 issued rwts: total=2048,2458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:19.388 job2: (groupid=0, jobs=1): err= 0: pid=1437200: Mon Jul 15 23:40:08 2024 00:14:19.388 read: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec) 00:14:19.388 slat (nsec): min=1618, max=4925.4k, avg=221186.94, stdev=774708.66 00:14:19.388 clat (usec): min=23231, max=29454, avg=28368.33, stdev=835.83 00:14:19.388 lat (usec): min=27014, max=33188, avg=28589.52, stdev=449.01 00:14:19.388 clat percentiles (usec): 00:14:19.388 | 1.00th=[23987], 5.00th=[27657], 10.00th=[27919], 20.00th=[28181], 00:14:19.388 | 30.00th=[28181], 40.00th=[28443], 50.00th=[28443], 60.00th=[28705], 00:14:19.388 | 70.00th=[28705], 80.00th=[28967], 90.00th=[28967], 95.00th=[29230], 00:14:19.388 | 99.00th=[29492], 99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 00:14:19.388 | 99.99th=[29492] 00:14:19.388 write: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(9.93MiB/1007msec); 0 zone resets 00:14:19.388 slat (nsec): min=1982, max=6978.1k, avg=211594.26, stdev=765695.19 00:14:19.388 clat (usec): min=4622, max=33397, avg=27320.23, stdev=4024.44 00:14:19.388 lat (usec): min=4631, max=33403, avg=27531.82, stdev=3985.18 00:14:19.388 clat percentiles (usec): 00:14:19.388 | 1.00th=[ 9241], 5.00th=[17433], 10.00th=[25035], 20.00th=[27657], 00:14:19.388 | 30.00th=[27919], 40.00th=[27919], 50.00th=[28181], 60.00th=[28443], 00:14:19.388 | 70.00th=[28443], 80.00th=[28705], 90.00th=[28967], 95.00th=[30802], 00:14:19.388 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33424], 99.95th=[33424], 00:14:19.388 | 99.99th=[33424] 00:14:19.388 bw ( KiB/s): min= 9600, max= 9712, per=14.75%, avg=9656.00, stdev=79.20, samples=2 00:14:19.388 iops : min= 2400, max= 2428, avg=2414.00, stdev=19.80, samples=2 00:14:19.388 lat (msec) : 10=0.72%, 20=2.46%, 50=96.82% 00:14:19.388 cpu : usr=1.09%, sys=3.58%, ctx=1287, majf=0, minf=1 00:14:19.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:14:19.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:14:19.388 issued rwts: total=2048,2542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:19.388 job3: (groupid=0, jobs=1): err= 0: pid=1437209: Mon Jul 15 23:40:08 2024 00:14:19.388 read: IOPS=8643, BW=33.8MiB/s (35.4MB/s)(34.0MiB/1007msec) 00:14:19.388 slat (nsec): min=1482, max=3674.0k, avg=55808.67, stdev=287772.78 00:14:19.388 clat (usec): min=6519, max=11590, avg=7333.26, stdev=452.87 00:14:19.388 lat (usec): min=6524, max=11600, avg=7389.06, stdev=519.53 00:14:19.388 clat percentiles (usec): 00:14:19.388 | 1.00th=[ 6718], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7177], 00:14:19.388 | 30.00th=[ 7242], 40.00th=[ 7242], 50.00th=[ 7308], 60.00th=[ 7308], 00:14:19.388 | 70.00th=[ 7373], 80.00th=[ 7373], 90.00th=[ 7439], 95.00th=[ 7570], 00:14:19.388 | 99.00th=[ 9896], 99.50th=[10421], 99.90th=[10683], 99.95th=[10814], 00:14:19.388 | 99.99th=[11600] 00:14:19.388 write: IOPS=8872, BW=34.7MiB/s (36.3MB/s)(34.9MiB/1007msec); 0 zone resets 00:14:19.388 slat (nsec): min=1959, max=3607.4k, avg=53942.43, stdev=274327.75 00:14:19.388 clat (usec): min=2309, max=13372, avg=7153.38, stdev=654.09 00:14:19.388 lat (usec): min=2317, max=13674, avg=7207.32, stdev=699.20 00:14:19.388 clat percentiles (usec): 00:14:19.388 | 1.00th=[ 6390], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 6980], 00:14:19.388 | 30.00th=[ 7046], 40.00th=[ 7046], 50.00th=[ 7111], 60.00th=[ 7111], 00:14:19.388 | 70.00th=[ 7111], 80.00th=[ 7177], 90.00th=[ 7308], 95.00th=[ 7635], 00:14:19.388 | 99.00th=[10159], 99.50th=[11863], 99.90th=[13304], 99.95th=[13304], 00:14:19.388 | 99.99th=[13435] 00:14:19.388 bw ( KiB/s): min=33864, max=36600, per=53.83%, avg=35232.00, stdev=1934.64, samples=2 00:14:19.388 iops : min= 8466, max= 9150, avg=8808.00, stdev=483.66, samples=2 00:14:19.388 lat (msec) : 4=0.12%, 10=98.76%, 20=1.11% 00:14:19.388 cpu : usr=4.57%, sys=7.55%, ctx=1217, majf=0, minf=1 00:14:19.388 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:14:19.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:19.388 issued rwts: total=8704,8935,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.388 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:19.388 00:14:19.388 Run status group 0 (all jobs): 00:14:19.388 READ: bw=58.6MiB/s (61.5MB/s), 8135KiB/s-33.8MiB/s (8330kB/s-35.4MB/s), io=59.1MiB (62.0MB), run=1007-1008msec 00:14:19.388 WRITE: bw=63.9MiB/s (67.0MB/s), 9764KiB/s-34.7MiB/s (9998kB/s-36.3MB/s), io=64.4MiB (67.6MB), run=1007-1008msec 00:14:19.388 00:14:19.388 Disk stats (read/write): 00:14:19.388 nvme0n1: ios=2097/2063, merge=0/0, ticks=14407/13188, in_queue=27595, util=84.07% 00:14:19.388 nvme0n2: ios=1682/2048, merge=0/0, ticks=11977/14440, in_queue=26417, util=84.81% 00:14:19.388 nvme0n3: ios=1764/2048, merge=0/0, ticks=12592/13911, in_queue=26503, util=87.72% 00:14:19.388 nvme0n4: ios=7168/7403, merge=0/0, ticks=51692/51689, in_queue=103381, util=89.50% 00:14:19.388 23:40:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:14:19.388 23:40:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1437393 00:14:19.388 23:40:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:14:19.388 23:40:08 nvmf_rdma.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:14:19.388 [global] 
00:14:19.388 thread=1 00:14:19.388 invalidate=1 00:14:19.388 rw=read 00:14:19.388 time_based=1 00:14:19.388 runtime=10 00:14:19.388 ioengine=libaio 00:14:19.388 direct=1 00:14:19.388 bs=4096 00:14:19.388 iodepth=1 00:14:19.388 norandommap=1 00:14:19.388 numjobs=1 00:14:19.388 00:14:19.388 [job0] 00:14:19.388 filename=/dev/nvme0n1 00:14:19.388 [job1] 00:14:19.388 filename=/dev/nvme0n2 00:14:19.388 [job2] 00:14:19.388 filename=/dev/nvme0n3 00:14:19.388 [job3] 00:14:19.388 filename=/dev/nvme0n4 00:14:19.388 Could not set queue depth (nvme0n1) 00:14:19.388 Could not set queue depth (nvme0n2) 00:14:19.388 Could not set queue depth (nvme0n3) 00:14:19.388 Could not set queue depth (nvme0n4) 00:14:19.656 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.656 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.656 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.656 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:19.656 fio-3.35 00:14:19.656 Starting 4 threads 00:14:22.181 23:40:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:14:22.460 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=74211328, buflen=4096 00:14:22.460 fio: pid=1437666, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:22.460 23:40:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:14:22.719 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=86810624, buflen=4096 00:14:22.719 fio: pid=1437661, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:22.719 23:40:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.719 23:40:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:14:22.719 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=21454848, buflen=4096 00:14:22.719 fio: pid=1437642, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:22.719 23:40:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.719 23:40:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:14:22.978 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=62894080, buflen=4096 00:14:22.978 fio: pid=1437652, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:14:22.978 23:40:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:22.978 23:40:11 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:14:22.978 00:14:22.978 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1437642: Mon Jul 15 23:40:11 2024 00:14:22.978 read: IOPS=6968, BW=27.2MiB/s (28.5MB/s)(84.5MiB/3103msec) 00:14:22.978 slat (usec): min=5, max=16668, avg= 9.21, stdev=155.16 
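The err=121 (Remote I/O error) results that follow are the expected outcome of this stage: fio keeps reading for 10 seconds while the script deletes the backing bdevs on the target, so the host sees its namespaces disappear mid-run. Reduced to a sketch (rpc.py path shortened to the SPDK tree, the job-file name is illustrative, and the bdev names are the ones deleted in this log):

# Hotplug pattern: run I/O in the background, pull the target's bdevs,
# then reap fio, which should fail with Remote I/O error (err=121).
fio nvmf-read-10s.fio &
fio_pid=$!
sleep 3                                   # let the reads get going first
scripts/rpc.py bdev_raid_delete concat0   # raid/concat bdevs backing some namespaces
scripts/rpc.py bdev_raid_delete raid0
for m in Malloc0 Malloc1; do              # malloc bdevs backing the others
    scripts/rpc.py bdev_malloc_delete "$m"
done
wait "$fio_pid" || echo "nvmf hotplug test: fio failed as expected"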
00:14:22.978 clat (usec): min=48, max=20972, avg=131.86, stdev=202.47 00:14:22.978 lat (usec): min=55, max=20978, avg=141.08, stdev=254.83 00:14:22.978 clat percentiles (usec): 00:14:22.978 | 1.00th=[ 58], 5.00th=[ 75], 10.00th=[ 79], 20.00th=[ 106], 00:14:22.978 | 30.00th=[ 122], 40.00th=[ 127], 50.00th=[ 131], 60.00th=[ 137], 00:14:22.978 | 70.00th=[ 151], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 174], 00:14:22.978 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 227], 99.95th=[ 253], 00:14:22.978 | 99.99th=[ 408] 00:14:22.978 bw ( KiB/s): min=23712, max=32240, per=24.57%, avg=27574.40, stdev=3283.11, samples=5 00:14:22.978 iops : min= 5928, max= 8060, avg=6893.60, stdev=820.78, samples=5 00:14:22.978 lat (usec) : 50=0.06%, 100=19.18%, 250=80.71%, 500=0.04% 00:14:22.978 lat (msec) : 50=0.01% 00:14:22.978 cpu : usr=2.13%, sys=7.32%, ctx=21628, majf=0, minf=1 00:14:22.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.978 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.978 issued rwts: total=21623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.978 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1437652: Mon Jul 15 23:40:11 2024 00:14:22.978 read: IOPS=9609, BW=37.5MiB/s (39.4MB/s)(124MiB/3303msec) 00:14:22.978 slat (usec): min=6, max=11993, avg= 8.84, stdev=142.74 00:14:22.978 clat (usec): min=46, max=21003, avg=93.95, stdev=121.40 00:14:22.978 lat (usec): min=55, max=21009, avg=102.79, stdev=187.23 00:14:22.978 clat percentiles (usec): 00:14:22.978 | 1.00th=[ 53], 5.00th=[ 57], 10.00th=[ 62], 20.00th=[ 73], 00:14:22.978 | 30.00th=[ 75], 40.00th=[ 77], 50.00th=[ 80], 60.00th=[ 83], 00:14:22.978 | 70.00th=[ 117], 80.00th=[ 126], 90.00th=[ 137], 95.00th=[ 157], 00:14:22.978 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 208], 99.95th=[ 215], 00:14:22.978 | 99.99th=[ 281] 00:14:22.978 bw ( KiB/s): min=26840, max=47544, per=33.26%, avg=37323.67, stdev=9374.67, samples=6 00:14:22.978 iops : min= 6710, max=11886, avg=9330.83, stdev=2343.61, samples=6 00:14:22.978 lat (usec) : 50=0.08%, 100=67.82%, 250=32.08%, 500=0.01% 00:14:22.978 lat (msec) : 50=0.01% 00:14:22.978 cpu : usr=3.36%, sys=10.36%, ctx=31747, majf=0, minf=1 00:14:22.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.978 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.978 issued rwts: total=31740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.978 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1437661: Mon Jul 15 23:40:11 2024 00:14:22.978 read: IOPS=7273, BW=28.4MiB/s (29.8MB/s)(82.8MiB/2914msec) 00:14:22.978 slat (usec): min=5, max=12884, avg= 8.60, stdev=103.15 00:14:22.978 clat (usec): min=66, max=20852, avg=126.51, stdev=145.73 00:14:22.978 lat (usec): min=72, max=20859, avg=135.12, stdev=178.64 00:14:22.978 clat percentiles (usec): 00:14:22.978 | 1.00th=[ 78], 5.00th=[ 82], 10.00th=[ 85], 20.00th=[ 90], 00:14:22.978 | 30.00th=[ 110], 40.00th=[ 122], 50.00th=[ 126], 60.00th=[ 130], 00:14:22.978 | 70.00th=[ 141], 80.00th=[ 157], 90.00th=[ 163], 95.00th=[ 172], 00:14:22.978 | 99.00th=[ 204], 
99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 237], 00:14:22.978 | 99.99th=[ 383] 00:14:22.978 bw ( KiB/s): min=23400, max=36312, per=26.58%, avg=29828.80, stdev=4627.13, samples=5 00:14:22.978 iops : min= 5850, max= 9078, avg=7457.20, stdev=1156.78, samples=5 00:14:22.979 lat (usec) : 100=27.98%, 250=71.97%, 500=0.03% 00:14:22.979 lat (msec) : 50=0.01% 00:14:22.979 cpu : usr=2.44%, sys=7.86%, ctx=21197, majf=0, minf=1 00:14:22.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.979 issued rwts: total=21195,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.979 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1437666: Mon Jul 15 23:40:11 2024 00:14:22.979 read: IOPS=6685, BW=26.1MiB/s (27.4MB/s)(70.8MiB/2710msec) 00:14:22.979 slat (nsec): min=6173, max=66339, avg=7882.91, stdev=2060.41 00:14:22.979 clat (usec): min=69, max=361, avg=139.15, stdev=25.32 00:14:22.979 lat (usec): min=76, max=406, avg=147.04, stdev=25.36 00:14:22.979 clat percentiles (usec): 00:14:22.979 | 1.00th=[ 85], 5.00th=[ 94], 10.00th=[ 115], 20.00th=[ 123], 00:14:22.979 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 147], 00:14:22.979 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 180], 00:14:22.979 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 225], 99.95th=[ 233], 00:14:22.979 | 99.99th=[ 343] 00:14:22.979 bw ( KiB/s): min=23600, max=30472, per=24.28%, avg=27254.40, stdev=2547.59, samples=5 00:14:22.979 iops : min= 5900, max= 7618, avg=6813.60, stdev=636.90, samples=5 00:14:22.979 lat (usec) : 100=7.62%, 250=92.35%, 500=0.02% 00:14:22.979 cpu : usr=2.21%, sys=8.01%, ctx=18121, majf=0, minf=2 00:14:22.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:22.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.979 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:22.979 issued rwts: total=18119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:22.979 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:22.979 00:14:22.979 Run status group 0 (all jobs): 00:14:22.979 READ: bw=110MiB/s (115MB/s), 26.1MiB/s-37.5MiB/s (27.4MB/s-39.4MB/s), io=362MiB (380MB), run=2710-3303msec 00:14:22.979 00:14:22.979 Disk stats (read/write): 00:14:22.979 nvme0n1: ios=19537/0, merge=0/0, ticks=2536/0, in_queue=2536, util=94.86% 00:14:22.979 nvme0n2: ios=29161/0, merge=0/0, ticks=2678/0, in_queue=2678, util=94.87% 00:14:22.979 nvme0n3: ios=20947/0, merge=0/0, ticks=2516/0, in_queue=2516, util=95.92% 00:14:22.979 nvme0n4: ios=17679/0, merge=0/0, ticks=2333/0, in_queue=2333, util=96.46% 00:14:23.238 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:23.238 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:14:23.497 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:23.497 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:14:23.497 23:40:12 
nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:23.497 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:14:23.755 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:14:23.755 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:14:24.014 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:14:24.014 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # wait 1437393 00:14:24.014 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:14:24.014 23:40:12 nvmf_rdma.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:24.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1213 -- # local i=0 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1214 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1221 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1225 -- # return 0 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:14:24.950 nvmf hotplug test: fio failed as expected 00:14:24.950 23:40:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.208 23:40:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:14:25.208 23:40:13 nvmf_rdma.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:25.208 rmmod nvme_rdma 00:14:25.208 rmmod nvme_fabrics 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1434678 ']' 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1434678 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@942 -- # '[' -z 1434678 ']' 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@946 -- # kill -0 1434678 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@947 -- # uname 00:14:25.208 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:25.209 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1434678 00:14:25.209 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:14:25.209 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:14:25.209 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1434678' 00:14:25.209 killing process with pid 1434678 00:14:25.209 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@961 -- # kill 1434678 00:14:25.209 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@966 -- # wait 1434678 00:14:25.466 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.466 23:40:14 nvmf_rdma.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:25.466 00:14:25.466 real 0m24.978s 00:14:25.466 user 1m51.628s 00:14:25.466 sys 0m8.089s 00:14:25.466 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:25.466 23:40:14 nvmf_rdma.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.466 ************************************ 00:14:25.466 END TEST nvmf_fio_target 00:14:25.466 ************************************ 00:14:25.466 23:40:14 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:14:25.466 23:40:14 nvmf_rdma -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:25.466 23:40:14 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:25.466 23:40:14 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:25.466 23:40:14 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:25.466 ************************************ 00:14:25.466 START TEST nvmf_bdevio 00:14:25.466 ************************************ 00:14:25.466 23:40:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:14:25.725 * Looking for test storage... 
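The disconnect-and-teardown sequence traced above condenses to a handful of commands. A sketch (NQN, serial, module names and ordering follow this log; $nvmfpid stands for the nvmf_tgt PID the harness tracks, here 1434678):

# Host side: drop the NVMe-oF controller and wait for its namespaces to vanish.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1                               # loop until no device with the test serial remains
done
# Target side: remove the subsystem, unload the host fabrics modules, stop nvmf_tgt.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -r nvme-rdma nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"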
00:14:25.725 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:14:25.725 23:40:14 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 
00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:30.985 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:30.985 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:30.986 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:30.986 Found net devices under 0000:da:00.0: mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:30.986 Found net devices under 0000:da:00.1: mlx_0_1 00:14:30.986 
23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@420 -- # rdma_device_init 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in 
$(get_rdma_if_list) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:30.986 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:30.986 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:30.986 altname enp218s0f0np0 00:14:30.986 altname ens818f0np0 00:14:30.986 inet 192.168.100.8/24 scope global mlx_0_0 00:14:30.986 valid_lft forever preferred_lft forever 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:30.986 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:30.986 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:30.986 altname enp218s0f1np1 00:14:30.986 altname ens818f1np1 00:14:30.986 inet 192.168.100.9/24 scope global mlx_0_1 00:14:30.986 valid_lft forever preferred_lft forever 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 
-- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:30.986 192.168.100.9' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:30.986 192.168.100.9' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:30.986 192.168.100.9' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:30.986 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:30.987 23:40:19 
nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1441758 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1441758 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@823 -- # '[' -z 1441758 ']' 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:30.987 23:40:19 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:14:30.987 [2024-07-15 23:40:19.835302] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:14:30.987 [2024-07-15 23:40:19.835349] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.987 [2024-07-15 23:40:19.884575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.987 [2024-07-15 23:40:19.965574] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.987 [2024-07-15 23:40:19.965606] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.987 [2024-07-15 23:40:19.965613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.987 [2024-07-15 23:40:19.965619] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.987 [2024-07-15 23:40:19.965624] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.987 [2024-07-15 23:40:19.965677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:30.987 [2024-07-15 23:40:19.965781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:14:30.987 [2024-07-15 23:40:19.965908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.987 [2024-07-15 23:40:19.965910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@856 -- # return 0 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:31.919 [2024-07-15 23:40:20.689648] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8c73d0/0x8cb8c0) succeed. 00:14:31.919 [2024-07-15 23:40:20.698808] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8c89c0/0x90cf50) succeed. 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:31.919 Malloc0 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:31.919 [2024-07-15 23:40:20.864345] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:31.919 { 00:14:31.919 "params": { 00:14:31.919 "name": "Nvme$subsystem", 00:14:31.919 "trtype": "$TEST_TRANSPORT", 00:14:31.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:31.919 "adrfam": "ipv4", 00:14:31.919 "trsvcid": "$NVMF_PORT", 00:14:31.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:31.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:31.919 "hdgst": ${hdgst:-false}, 00:14:31.919 "ddgst": ${ddgst:-false} 00:14:31.919 }, 00:14:31.919 "method": "bdev_nvme_attach_controller" 00:14:31.919 } 00:14:31.919 EOF 00:14:31.919 )") 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:14:31.919 23:40:20 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:31.919 "params": { 00:14:31.919 "name": "Nvme1", 00:14:31.919 "trtype": "rdma", 00:14:31.919 "traddr": "192.168.100.8", 00:14:31.919 "adrfam": "ipv4", 00:14:31.919 "trsvcid": "4420", 00:14:31.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:31.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:31.919 "hdgst": false, 00:14:31.919 "ddgst": false 00:14:31.919 }, 00:14:31.919 "method": "bdev_nvme_attach_controller" 00:14:31.919 }' 00:14:32.176 [2024-07-15 23:40:20.913157] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:14:32.176 [2024-07-15 23:40:20.913203] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1441896 ] 00:14:32.176 [2024-07-15 23:40:20.971141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:32.176 [2024-07-15 23:40:21.047586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.176 [2024-07-15 23:40:21.047683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.176 [2024-07-15 23:40:21.047685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.435 I/O targets: 00:14:32.435 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:32.435 00:14:32.435 00:14:32.435 CUnit - A unit testing framework for C - Version 2.1-3 00:14:32.435 http://cunit.sourceforge.net/ 00:14:32.435 00:14:32.435 00:14:32.435 Suite: bdevio tests on: Nvme1n1 00:14:32.435 Test: blockdev write read block ...passed 00:14:32.435 Test: blockdev write zeroes read block ...passed 00:14:32.435 Test: blockdev write zeroes read no split ...passed 00:14:32.435 Test: blockdev write zeroes read split ...passed 00:14:32.435 Test: blockdev write zeroes read split partial ...passed 00:14:32.435 Test: blockdev reset ...[2024-07-15 23:40:21.249809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:32.435 [2024-07-15 23:40:21.272324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:14:32.435 [2024-07-15 23:40:21.299050] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:32.435 passed 00:14:32.435 Test: blockdev write read 8 blocks ...passed 00:14:32.435 Test: blockdev write read size > 128k ...passed 00:14:32.435 Test: blockdev write read invalid size ...passed 00:14:32.435 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:32.435 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:32.435 Test: blockdev write read max offset ...passed 00:14:32.435 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:32.435 Test: blockdev writev readv 8 blocks ...passed 00:14:32.435 Test: blockdev writev readv 30 x 1block ...passed 00:14:32.435 Test: blockdev writev readv block ...passed 00:14:32.435 Test: blockdev writev readv size > 128k ...passed 00:14:32.435 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:32.435 Test: blockdev comparev and writev ...[2024-07-15 23:40:21.302005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.435 [2024-07-15 23:40:21.302031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:32.435 [2024-07-15 23:40:21.302044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.435 [2024-07-15 23:40:21.302052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:32.435 [2024-07-15 23:40:21.302215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.435 [2024-07-15 23:40:21.302224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:32.435 [2024-07-15 23:40:21.302232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.435 [2024-07-15 23:40:21.302238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:32.435 [2024-07-15 23:40:21.302403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.435 [2024-07-15 23:40:21.302412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:32.435 [2024-07-15 23:40:21.302421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.435 [2024-07-15 23:40:21.302428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:32.435 [2024-07-15 23:40:21.302591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.435 [2024-07-15 23:40:21.302600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:32.435 [2024-07-15 23:40:21.302608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:32.435 [2024-07-15 23:40:21.302616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:32.435 passed 00:14:32.435 Test: blockdev nvme passthru rw ...passed 00:14:32.435 Test: blockdev nvme passthru vendor specific ...[2024-07-15 23:40:21.302875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:32.435 [2024-07-15 23:40:21.302886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:32.436 [2024-07-15 23:40:21.302924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:32.436 [2024-07-15 23:40:21.302933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:32.436 [2024-07-15 23:40:21.302973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:32.436 [2024-07-15 23:40:21.302981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:32.436 [2024-07-15 23:40:21.303019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:14:32.436 [2024-07-15 23:40:21.303028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:32.436 passed 00:14:32.436 Test: blockdev nvme admin passthru ...passed 00:14:32.436 Test: blockdev copy ...passed 00:14:32.436 00:14:32.436 Run Summary: Type Total Ran Passed Failed Inactive 00:14:32.436 suites 1 1 n/a 0 0 00:14:32.436 tests 23 23 23 0 0 00:14:32.436 asserts 152 152 152 0 n/a 00:14:32.436 00:14:32.436 Elapsed time = 0.173 seconds 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:32.693 rmmod nvme_rdma 00:14:32.693 rmmod nvme_fabrics 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1441758 ']' 00:14:32.693 23:40:21 
nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1441758 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@942 -- # '[' -z 1441758 ']' 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@946 -- # kill -0 1441758 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@947 -- # uname 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1441758 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@948 -- # process_name=reactor_3 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' reactor_3 = sudo ']' 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1441758' 00:14:32.693 killing process with pid 1441758 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@961 -- # kill 1441758 00:14:32.693 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@966 -- # wait 1441758 00:14:32.952 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:32.952 23:40:21 nvmf_rdma.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:32.952 00:14:32.952 real 0m7.437s 00:14:32.952 user 0m9.989s 00:14:32.952 sys 0m4.498s 00:14:32.952 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@1118 -- # xtrace_disable 00:14:32.952 23:40:21 nvmf_rdma.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:14:32.952 ************************************ 00:14:32.952 END TEST nvmf_bdevio 00:14:32.952 ************************************ 00:14:32.952 23:40:21 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:14:32.952 23:40:21 nvmf_rdma -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:32.952 23:40:21 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:14:32.952 23:40:21 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:14:32.952 23:40:21 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:14:32.952 ************************************ 00:14:32.952 START TEST nvmf_auth_target 00:14:32.952 ************************************ 00:14:32.952 23:40:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:33.211 * Looking for test storage... 
00:14:33.211 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@59 -- # nvmftestinit 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.211 23:40:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.472 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:38.473 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:38.473 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:38.473 Found net devices under 0000:da:00.0: mlx_0_0 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:38.473 Found net devices under 0000:da:00.1: mlx_0_1 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for 
net_dev in "${net_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:38.473 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:38.473 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:14:38.473 altname enp218s0f0np0 00:14:38.473 altname ens818f0np0 00:14:38.473 inet 192.168.100.8/24 scope global mlx_0_0 00:14:38.473 valid_lft forever preferred_lft forever 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:38.473 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:38.473 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:14:38.473 altname enp218s0f1np1 00:14:38.473 altname ens818f1np1 00:14:38.473 inet 192.168.100.9/24 scope global mlx_0_1 00:14:38.473 valid_lft forever preferred_lft forever 
00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:38.473 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:38.474 
23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:38.474 192.168.100.9' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:38.474 192.168.100.9' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:38.474 192.168.100.9' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1445076 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1445076 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1445076 ']' 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
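The block above derives the first and second target IPs from RDMA_IP_LIST with head/tail and fixes the transport options before nvmfappstart launches nvmf_tgt. A short sketch of that selection, using the two addresses observed in this run (other rigs will differ):

# Sketch of the target-IP selection from nvmf/common.sh traced above.
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"
# -> first=192.168.100.8 second=192.168.100.9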
00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:38.474 23:40:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1445187 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=62f7f90b7dc7b7d7233ea173b490c6eda13b5f4aa432ca4b 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Mzj 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 62f7f90b7dc7b7d7233ea173b490c6eda13b5f4aa432ca4b 0 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 62f7f90b7dc7b7d7233ea173b490c6eda13b5f4aa432ca4b 0 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=62f7f90b7dc7b7d7233ea173b490c6eda13b5f4aa432ca4b 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:39.041 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Mzj 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Mzj 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.Mzj 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- 
nvmf/common.sh@723 -- # local digest len file key 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=45db2aecd8727625b18c59ac26d1ca6067558405a04c850f219520b78d84731d 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.BBP 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 45db2aecd8727625b18c59ac26d1ca6067558405a04c850f219520b78d84731d 3 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 45db2aecd8727625b18c59ac26d1ca6067558405a04c850f219520b78d84731d 3 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=45db2aecd8727625b18c59ac26d1ca6067558405a04c850f219520b78d84731d 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.BBP 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.BBP 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.BBP 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=27c56f09d2e88d6512579730d9ae70f2 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.29u 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 27c56f09d2e88d6512579730d9ae70f2 1 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 27c56f09d2e88d6512579730d9ae70f2 1 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=27c56f09d2e88d6512579730d9ae70f2 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:39.042 23:40:27 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:39.042 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.29u 00:14:39.042 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.29u 00:14:39.042 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.29u 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ce94a0732016ca77a82ead956472ee1bd2f904669c52f20 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qIR 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ce94a0732016ca77a82ead956472ee1bd2f904669c52f20 2 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2ce94a0732016ca77a82ead956472ee1bd2f904669c52f20 2 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2ce94a0732016ca77a82ead956472ee1bd2f904669c52f20 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qIR 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qIR 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.qIR 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target 
-- nvmf/common.sh@727 -- # key=87bee79976635b75730984a05cc3d63f27ed9abcd89b460a 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.EDy 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 87bee79976635b75730984a05cc3d63f27ed9abcd89b460a 2 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 87bee79976635b75730984a05cc3d63f27ed9abcd89b460a 2 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=87bee79976635b75730984a05cc3d63f27ed9abcd89b460a 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.EDy 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.EDy 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.EDy 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6398de95c1d696d6a5f85ade3ffcc877 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rC8 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6398de95c1d696d6a5f85ade3ffcc877 1 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6398de95c1d696d6a5f85ade3ffcc877 1 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6398de95c1d696d6a5f85ade3ffcc877 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rC8 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rC8 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.rC8 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b76a73d8d4edb578d9fbfa3c308390708054b3d9c1441c44b65963aac0809a2e 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RD8 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b76a73d8d4edb578d9fbfa3c308390708054b3d9c1441c44b65963aac0809a2e 3 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b76a73d8d4edb578d9fbfa3c308390708054b3d9c1441c44b65963aac0809a2e 3 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b76a73d8d4edb578d9fbfa3c308390708054b3d9c1441c44b65963aac0809a2e 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RD8 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RD8 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.RD8 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1445076 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1445076 ']' 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
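The gen_dhchap_key calls above draw random bytes with xxd, wrap them into DHHC-1 secrets, and store each secret in a 0600 temp file, populating keys[0..3] and ckeys[0..2]. A rough sketch of the shell side of that flow, assuming nvmf/common.sh from the SPDK tree has been sourced so the format_dhchap_key helper (which does the DHHC-1 encoding via an inline python snippet, not reproduced here) is available; lengths follow the trace (len 48 -> 24 random bytes, len 64 -> 32):

# Sketch of the gen_dhchap_key flow traced above ("null 48" variant shown;
# the sha512 keys in this run use len=64, i.e. 32 random bytes).
digest=null
len=48
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)      # $len hex characters of randomness
file=$(mktemp -t "spdk.key-${digest}.XXX")
format_dhchap_key "$key" 0 > "$file"                # 0 = digest id for "null" in the digests map above
chmod 0600 "$file"
echo "$file"                                        # callers capture this path into keys[] / ckeys[]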
00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:39.302 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1445187 /var/tmp/host.sock 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1445187 ']' 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/host.sock 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:39.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:14:39.561 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Mzj 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Mzj 00:14:39.820 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Mzj 00:14:40.079 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.BBP ]] 00:14:40.079 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BBP 00:14:40.080 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:40.080 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.080 23:40:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:40.080 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BBP 00:14:40.080 23:40:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.BBP 00:14:40.080 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:40.080 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.29u 00:14:40.080 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:40.080 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.080 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:40.080 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.29u 00:14:40.080 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.29u 00:14:40.338 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.qIR ]] 00:14:40.338 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qIR 00:14:40.338 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:40.338 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.338 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:40.338 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qIR 00:14:40.338 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.qIR 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.EDy 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.EDy 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.EDy 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.rC8 ]] 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rC8 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rC8 00:14:40.597 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rC8 00:14:40.855 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:40.855 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.RD8 00:14:40.855 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:40.855 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.855 23:40:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:40.855 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.RD8 00:14:40.855 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.RD8 00:14:41.114 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:41.114 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:41.114 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:41.114 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.114 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:41.114 23:40:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.114 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.372 00:14:41.372 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.372 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.372 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.630 { 00:14:41.630 "cntlid": 1, 00:14:41.630 "qid": 0, 00:14:41.630 "state": "enabled", 00:14:41.630 "thread": "nvmf_tgt_poll_group_000", 00:14:41.630 "listen_address": { 00:14:41.630 "trtype": "RDMA", 00:14:41.630 "adrfam": "IPv4", 00:14:41.630 "traddr": "192.168.100.8", 00:14:41.630 "trsvcid": "4420" 00:14:41.630 }, 00:14:41.630 "peer_address": { 00:14:41.630 "trtype": "RDMA", 00:14:41.630 "adrfam": "IPv4", 00:14:41.630 "traddr": "192.168.100.8", 00:14:41.630 "trsvcid": "37477" 00:14:41.630 }, 00:14:41.630 "auth": { 00:14:41.630 "state": "completed", 00:14:41.630 "digest": "sha256", 00:14:41.630 "dhgroup": "null" 00:14:41.630 } 00:14:41.630 } 00:14:41.630 ]' 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:41.630 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.889 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.889 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.889 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.889 23:40:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:14:42.823 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.823 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:42.823 23:40:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.823 23:40:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.823 23:40:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.824 23:40:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.082 00:14:43.082 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.082 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.082 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.340 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.340 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.340 23:40:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:43.340 23:40:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.340 23:40:32 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:43.340 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.340 { 00:14:43.340 "cntlid": 3, 00:14:43.340 "qid": 0, 00:14:43.340 "state": "enabled", 00:14:43.340 "thread": "nvmf_tgt_poll_group_000", 00:14:43.340 "listen_address": { 00:14:43.340 "trtype": "RDMA", 00:14:43.340 "adrfam": "IPv4", 00:14:43.340 "traddr": "192.168.100.8", 00:14:43.340 "trsvcid": "4420" 00:14:43.340 }, 00:14:43.340 "peer_address": { 00:14:43.340 "trtype": "RDMA", 00:14:43.340 "adrfam": "IPv4", 00:14:43.340 "traddr": "192.168.100.8", 00:14:43.340 "trsvcid": "38014" 00:14:43.340 }, 00:14:43.340 "auth": { 00:14:43.341 "state": "completed", 00:14:43.341 "digest": "sha256", 00:14:43.341 "dhgroup": "null" 00:14:43.341 } 00:14:43.341 } 00:14:43.341 ]' 00:14:43.341 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.341 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.341 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.341 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:43.341 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.600 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.600 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.600 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.600 23:40:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:14:44.167 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.425 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:44.425 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:44.425 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.425 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:44.425 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.425 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:44.425 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:44.682 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:44.682 23:40:33 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.682 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.682 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:44.682 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:44.682 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.682 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.683 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:44.683 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.683 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:44.683 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.683 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.683 00:14:44.683 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.683 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.683 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.940 { 00:14:44.940 "cntlid": 5, 00:14:44.940 "qid": 0, 00:14:44.940 "state": "enabled", 00:14:44.940 "thread": "nvmf_tgt_poll_group_000", 00:14:44.940 "listen_address": { 00:14:44.940 "trtype": "RDMA", 00:14:44.940 "adrfam": "IPv4", 00:14:44.940 "traddr": "192.168.100.8", 00:14:44.940 "trsvcid": "4420" 00:14:44.940 }, 00:14:44.940 "peer_address": { 00:14:44.940 "trtype": "RDMA", 00:14:44.940 "adrfam": "IPv4", 00:14:44.940 "traddr": "192.168.100.8", 00:14:44.940 "trsvcid": "32823" 00:14:44.940 }, 00:14:44.940 "auth": { 00:14:44.940 "state": "completed", 00:14:44.940 "digest": "sha256", 00:14:44.940 "dhgroup": "null" 00:14:44.940 } 00:14:44.940 } 00:14:44.940 ]' 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:14:44.940 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.198 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:45.198 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.198 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.198 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.198 23:40:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.198 23:40:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:14:45.765 23:40:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.023 23:40:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:46.023 23:40:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:46.023 23:40:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.023 23:40:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:46.023 23:40:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.023 23:40:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:46.023 23:40:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- 
# hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.281 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.281 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.538 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.538 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.538 23:40:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:46.538 23:40:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.538 23:40:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:46.538 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.538 { 00:14:46.538 "cntlid": 7, 00:14:46.538 "qid": 0, 00:14:46.538 "state": "enabled", 00:14:46.538 "thread": "nvmf_tgt_poll_group_000", 00:14:46.538 "listen_address": { 00:14:46.538 "trtype": "RDMA", 00:14:46.538 "adrfam": "IPv4", 00:14:46.538 "traddr": "192.168.100.8", 00:14:46.538 "trsvcid": "4420" 00:14:46.538 }, 00:14:46.538 "peer_address": { 00:14:46.538 "trtype": "RDMA", 00:14:46.538 "adrfam": "IPv4", 00:14:46.539 "traddr": "192.168.100.8", 00:14:46.539 "trsvcid": "54086" 00:14:46.539 }, 00:14:46.539 "auth": { 00:14:46.539 "state": "completed", 00:14:46.539 "digest": "sha256", 00:14:46.539 "dhgroup": "null" 00:14:46.539 } 00:14:46.539 } 00:14:46.539 ]' 00:14:46.539 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.539 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.539 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.539 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:46.539 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.796 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.796 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.796 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.796 23:40:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:47.729 23:40:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.730 23:40:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:47.730 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.730 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.988 00:14:47.988 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.988 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.988 23:40:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.246 { 00:14:48.246 "cntlid": 9, 00:14:48.246 "qid": 0, 00:14:48.246 "state": "enabled", 00:14:48.246 "thread": "nvmf_tgt_poll_group_000", 00:14:48.246 "listen_address": { 00:14:48.246 "trtype": "RDMA", 00:14:48.246 "adrfam": "IPv4", 00:14:48.246 "traddr": "192.168.100.8", 00:14:48.246 "trsvcid": "4420" 00:14:48.246 }, 00:14:48.246 "peer_address": { 00:14:48.246 "trtype": "RDMA", 00:14:48.246 "adrfam": "IPv4", 00:14:48.246 "traddr": "192.168.100.8", 00:14:48.246 "trsvcid": "54740" 00:14:48.246 }, 00:14:48.246 "auth": { 00:14:48.246 "state": "completed", 00:14:48.246 "digest": "sha256", 00:14:48.246 "dhgroup": "ffdhe2048" 00:14:48.246 } 00:14:48.246 } 00:14:48.246 ]' 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.246 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.504 23:40:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:14:49.071 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:49.330 
23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.330 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.588 00:14:49.588 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.588 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.588 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.847 { 00:14:49.847 "cntlid": 11, 00:14:49.847 "qid": 0, 00:14:49.847 "state": "enabled", 00:14:49.847 "thread": "nvmf_tgt_poll_group_000", 00:14:49.847 "listen_address": { 00:14:49.847 "trtype": "RDMA", 
00:14:49.847 "adrfam": "IPv4", 00:14:49.847 "traddr": "192.168.100.8", 00:14:49.847 "trsvcid": "4420" 00:14:49.847 }, 00:14:49.847 "peer_address": { 00:14:49.847 "trtype": "RDMA", 00:14:49.847 "adrfam": "IPv4", 00:14:49.847 "traddr": "192.168.100.8", 00:14:49.847 "trsvcid": "49619" 00:14:49.847 }, 00:14:49.847 "auth": { 00:14:49.847 "state": "completed", 00:14:49.847 "digest": "sha256", 00:14:49.847 "dhgroup": "ffdhe2048" 00:14:49.847 } 00:14:49.847 } 00:14:49.847 ]' 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:49.847 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.105 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.105 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.105 23:40:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.106 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:14:50.755 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:51.014 
23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.014 23:40:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.274 00:14:51.274 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.274 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.274 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.532 { 00:14:51.532 "cntlid": 13, 00:14:51.532 "qid": 0, 00:14:51.532 "state": "enabled", 00:14:51.532 "thread": "nvmf_tgt_poll_group_000", 00:14:51.532 "listen_address": { 00:14:51.532 "trtype": "RDMA", 00:14:51.532 "adrfam": "IPv4", 00:14:51.532 "traddr": "192.168.100.8", 00:14:51.532 "trsvcid": "4420" 00:14:51.532 }, 00:14:51.532 "peer_address": { 00:14:51.532 "trtype": "RDMA", 00:14:51.532 "adrfam": "IPv4", 00:14:51.532 "traddr": "192.168.100.8", 00:14:51.532 "trsvcid": "51475" 00:14:51.532 }, 00:14:51.532 "auth": { 00:14:51.532 "state": "completed", 00:14:51.532 "digest": "sha256", 00:14:51.532 "dhgroup": "ffdhe2048" 00:14:51.532 } 00:14:51.532 } 00:14:51.532 ]' 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:51.532 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
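For readers following the trace: each connect_authenticate pass above reduces to the same short sequence of SPDK RPCs and nvme-cli calls. The sketch below is reconstructed from the commands visible in this log (same NQNs, target address 192.168.100.8:4420, host RPC socket /var/tmp/host.sock, and key names key0/ckey0); it is an illustrative summary, not the test script itself. Assumptions: the target-side calls are shown as plain rpc.py against the target's default RPC socket (the log's rpc_cmd wrapper hides the socket), the key0.txt/ckey0.txt files stand in for the literal DHHC-1 secrets passed to nvme connect, and the single combined jq filter replaces the three separate jq checks in the log.

# target side: allow the host to log in to the subsystem with DH-HMAC-CHAP keys
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# host side: pin the initiator to one digest/dhgroup pair, then attach a controller
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
  -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# verify on the target that the qpair completed authentication with the expected parameters
scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
  | jq -r '.[0].auth | .state + " " + .digest + " " + .dhgroup'

# repeat the login through the kernel initiator, then tear everything down
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
  --hostid 803833e2-2ada-e911-906e-0017a4403562 \
  --dhchap-secret "$(cat key0.txt)" --dhchap-ctrl-secret "$(cat ckey0.txt)"
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

The remainder of the log simply iterates this cycle over the configured dhgroups (null, ffdhe2048, ffdhe3072, ffdhe4096, ...) and key indexes 0-3, checking after each attach that .auth.state is "completed" and that the negotiated digest and dhgroup match what bdev_nvme_set_options allowed.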
00:14:51.791 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.791 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.791 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.791 23:40:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.726 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.985 00:14:52.985 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.985 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.985 23:40:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.250 { 00:14:53.250 "cntlid": 15, 00:14:53.250 "qid": 0, 00:14:53.250 "state": "enabled", 00:14:53.250 "thread": "nvmf_tgt_poll_group_000", 00:14:53.250 "listen_address": { 00:14:53.250 "trtype": "RDMA", 00:14:53.250 "adrfam": "IPv4", 00:14:53.250 "traddr": "192.168.100.8", 00:14:53.250 "trsvcid": "4420" 00:14:53.250 }, 00:14:53.250 "peer_address": { 00:14:53.250 "trtype": "RDMA", 00:14:53.250 "adrfam": "IPv4", 00:14:53.250 "traddr": "192.168.100.8", 00:14:53.250 "trsvcid": "35914" 00:14:53.250 }, 00:14:53.250 "auth": { 00:14:53.250 "state": "completed", 00:14:53.250 "digest": "sha256", 00:14:53.250 "dhgroup": "ffdhe2048" 00:14:53.250 } 00:14:53.250 } 00:14:53.250 ]' 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.250 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.508 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:14:54.072 23:40:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.330 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.588 00:14:54.588 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.588 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.588 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.845 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.845 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.845 23:40:43 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:54.845 23:40:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.845 23:40:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:54.845 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.845 { 00:14:54.845 "cntlid": 17, 00:14:54.845 "qid": 0, 00:14:54.845 "state": "enabled", 00:14:54.845 "thread": "nvmf_tgt_poll_group_000", 00:14:54.845 "listen_address": { 00:14:54.845 "trtype": "RDMA", 00:14:54.845 "adrfam": "IPv4", 00:14:54.845 "traddr": "192.168.100.8", 00:14:54.845 "trsvcid": "4420" 00:14:54.845 }, 00:14:54.845 "peer_address": { 00:14:54.845 "trtype": "RDMA", 00:14:54.845 "adrfam": "IPv4", 00:14:54.845 "traddr": "192.168.100.8", 00:14:54.846 "trsvcid": "44994" 00:14:54.846 }, 00:14:54.846 "auth": { 00:14:54.846 "state": "completed", 00:14:54.846 "digest": "sha256", 00:14:54.846 "dhgroup": "ffdhe3072" 00:14:54.846 } 00:14:54.846 } 00:14:54.846 ]' 00:14:54.846 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.846 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.846 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.846 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:54.846 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.103 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.103 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.103 23:40:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.103 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:14:55.667 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:55.925 23:40:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.183 23:40:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:56.183 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.183 23:40:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.183 00:14:56.183 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.183 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.183 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.441 { 00:14:56.441 "cntlid": 19, 00:14:56.441 "qid": 0, 00:14:56.441 "state": "enabled", 00:14:56.441 "thread": "nvmf_tgt_poll_group_000", 00:14:56.441 "listen_address": { 00:14:56.441 "trtype": "RDMA", 00:14:56.441 "adrfam": "IPv4", 00:14:56.441 "traddr": "192.168.100.8", 00:14:56.441 "trsvcid": "4420" 00:14:56.441 }, 00:14:56.441 "peer_address": { 00:14:56.441 "trtype": "RDMA", 00:14:56.441 "adrfam": "IPv4", 00:14:56.441 "traddr": "192.168.100.8", 00:14:56.441 "trsvcid": "44558" 00:14:56.441 }, 00:14:56.441 "auth": { 
00:14:56.441 "state": "completed", 00:14:56.441 "digest": "sha256", 00:14:56.441 "dhgroup": "ffdhe3072" 00:14:56.441 } 00:14:56.441 } 00:14:56.441 ]' 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.441 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.699 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:56.699 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.699 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.699 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.699 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.699 23:40:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.635 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.894 00:14:57.894 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.894 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.894 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.152 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.152 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.152 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:58.152 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.152 23:40:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:58.152 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.152 { 00:14:58.152 "cntlid": 21, 00:14:58.152 "qid": 0, 00:14:58.152 "state": "enabled", 00:14:58.152 "thread": "nvmf_tgt_poll_group_000", 00:14:58.152 "listen_address": { 00:14:58.152 "trtype": "RDMA", 00:14:58.152 "adrfam": "IPv4", 00:14:58.152 "traddr": "192.168.100.8", 00:14:58.152 "trsvcid": "4420" 00:14:58.152 }, 00:14:58.152 "peer_address": { 00:14:58.152 "trtype": "RDMA", 00:14:58.152 "adrfam": "IPv4", 00:14:58.152 "traddr": "192.168.100.8", 00:14:58.152 "trsvcid": "52313" 00:14:58.152 }, 00:14:58.152 "auth": { 00:14:58.152 "state": "completed", 00:14:58.152 "digest": "sha256", 00:14:58.152 "dhgroup": "ffdhe3072" 00:14:58.152 } 00:14:58.152 } 00:14:58.152 ]' 00:14:58.152 23:40:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.152 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.152 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.152 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:58.152 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.152 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.152 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.152 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.411 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:14:58.979 23:40:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.238 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.496 00:14:59.496 23:40:48 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.496 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.496 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.755 { 00:14:59.755 "cntlid": 23, 00:14:59.755 "qid": 0, 00:14:59.755 "state": "enabled", 00:14:59.755 "thread": "nvmf_tgt_poll_group_000", 00:14:59.755 "listen_address": { 00:14:59.755 "trtype": "RDMA", 00:14:59.755 "adrfam": "IPv4", 00:14:59.755 "traddr": "192.168.100.8", 00:14:59.755 "trsvcid": "4420" 00:14:59.755 }, 00:14:59.755 "peer_address": { 00:14:59.755 "trtype": "RDMA", 00:14:59.755 "adrfam": "IPv4", 00:14:59.755 "traddr": "192.168.100.8", 00:14:59.755 "trsvcid": "37137" 00:14:59.755 }, 00:14:59.755 "auth": { 00:14:59.755 "state": "completed", 00:14:59.755 "digest": "sha256", 00:14:59.755 "dhgroup": "ffdhe3072" 00:14:59.755 } 00:14:59.755 } 00:14:59.755 ]' 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:59.755 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.013 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.013 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.013 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.013 23:40:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:00.579 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.837 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:00.837 23:40:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:00.837 23:40:49 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:00.837 23:40:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:00.837 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:00.837 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.837 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:00.837 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.095 23:40:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.352 00:15:01.352 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.352 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.352 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.352 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.352 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.352 23:40:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:01.352 23:40:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.352 23:40:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:01.352 23:40:50 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.352 { 00:15:01.352 "cntlid": 25, 00:15:01.352 "qid": 0, 00:15:01.352 "state": "enabled", 00:15:01.352 "thread": "nvmf_tgt_poll_group_000", 00:15:01.352 "listen_address": { 00:15:01.352 "trtype": "RDMA", 00:15:01.352 "adrfam": "IPv4", 00:15:01.352 "traddr": "192.168.100.8", 00:15:01.352 "trsvcid": "4420" 00:15:01.352 }, 00:15:01.352 "peer_address": { 00:15:01.352 "trtype": "RDMA", 00:15:01.352 "adrfam": "IPv4", 00:15:01.352 "traddr": "192.168.100.8", 00:15:01.352 "trsvcid": "48312" 00:15:01.352 }, 00:15:01.352 "auth": { 00:15:01.352 "state": "completed", 00:15:01.352 "digest": "sha256", 00:15:01.353 "dhgroup": "ffdhe4096" 00:15:01.353 } 00:15:01.353 } 00:15:01.353 ]' 00:15:01.353 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.611 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.611 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.611 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:01.611 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.611 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.611 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.611 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.871 23:40:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:02.438 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.438 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:02.438 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:02.438 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.438 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:02.438 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.438 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:02.438 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # 
local digest dhgroup key ckey qpairs 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.696 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.953 00:15:02.953 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.953 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.953 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.211 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.211 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.211 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:03.211 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.211 23:40:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:03.211 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.211 { 00:15:03.211 "cntlid": 27, 00:15:03.211 "qid": 0, 00:15:03.211 "state": "enabled", 00:15:03.211 "thread": "nvmf_tgt_poll_group_000", 00:15:03.211 "listen_address": { 00:15:03.211 "trtype": "RDMA", 00:15:03.211 "adrfam": "IPv4", 00:15:03.211 "traddr": "192.168.100.8", 00:15:03.211 "trsvcid": "4420" 00:15:03.211 }, 00:15:03.211 "peer_address": { 00:15:03.211 "trtype": "RDMA", 00:15:03.211 "adrfam": "IPv4", 00:15:03.211 "traddr": "192.168.100.8", 00:15:03.211 "trsvcid": "36557" 00:15:03.211 }, 00:15:03.211 "auth": { 00:15:03.211 "state": "completed", 00:15:03.211 "digest": "sha256", 00:15:03.211 "dhgroup": "ffdhe4096" 00:15:03.211 } 00:15:03.211 } 00:15:03.211 ]' 00:15:03.211 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.211 23:40:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 
]] 00:15:03.211 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.211 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:03.211 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.211 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.211 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.211 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.470 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:04.036 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.036 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:04.036 23:40:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:04.036 23:40:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.036 23:40:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:04.036 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.036 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:04.036 23:40:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:04.295 23:40:53 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.295 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.553 00:15:04.553 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.553 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.553 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.813 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.813 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.813 23:40:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:04.813 23:40:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.813 23:40:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:04.813 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.813 { 00:15:04.813 "cntlid": 29, 00:15:04.813 "qid": 0, 00:15:04.813 "state": "enabled", 00:15:04.813 "thread": "nvmf_tgt_poll_group_000", 00:15:04.814 "listen_address": { 00:15:04.814 "trtype": "RDMA", 00:15:04.814 "adrfam": "IPv4", 00:15:04.814 "traddr": "192.168.100.8", 00:15:04.814 "trsvcid": "4420" 00:15:04.814 }, 00:15:04.814 "peer_address": { 00:15:04.814 "trtype": "RDMA", 00:15:04.814 "adrfam": "IPv4", 00:15:04.814 "traddr": "192.168.100.8", 00:15:04.814 "trsvcid": "51730" 00:15:04.814 }, 00:15:04.814 "auth": { 00:15:04.814 "state": "completed", 00:15:04.814 "digest": "sha256", 00:15:04.814 "dhgroup": "ffdhe4096" 00:15:04.814 } 00:15:04.814 } 00:15:04.814 ]' 00:15:04.814 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.814 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:04.814 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.814 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:04.814 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.814 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.814 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.814 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.073 23:40:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 
803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:15:05.639 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.897 23:40:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.156 00:15:06.156 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.156 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.156 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.414 { 00:15:06.414 "cntlid": 31, 00:15:06.414 "qid": 0, 00:15:06.414 "state": "enabled", 00:15:06.414 "thread": "nvmf_tgt_poll_group_000", 00:15:06.414 "listen_address": { 00:15:06.414 "trtype": "RDMA", 00:15:06.414 "adrfam": "IPv4", 00:15:06.414 "traddr": "192.168.100.8", 00:15:06.414 "trsvcid": "4420" 00:15:06.414 }, 00:15:06.414 "peer_address": { 00:15:06.414 "trtype": "RDMA", 00:15:06.414 "adrfam": "IPv4", 00:15:06.414 "traddr": "192.168.100.8", 00:15:06.414 "trsvcid": "32983" 00:15:06.414 }, 00:15:06.414 "auth": { 00:15:06.414 "state": "completed", 00:15:06.414 "digest": "sha256", 00:15:06.414 "dhgroup": "ffdhe4096" 00:15:06.414 } 00:15:06.414 } 00:15:06.414 ]' 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:06.414 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.673 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.673 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.673 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.673 23:40:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:07.261 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.520 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:07.520 23:40:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:07.520 23:40:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.520 23:40:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:07.520 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.520 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 
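[annotation] The xtrace above comes from SPDK's test/nvmf/target/auth.sh exercising DH-HMAC-CHAP over RDMA. The repeated blocks in this log are produced by nested loops over digests, DH groups and key indexes; the sketch below is a condensed reconstruction from the @91-@96 trace markers, not the verbatim script source, and rpc_cmd/hostrpc are the wrappers seen in the trace (rpc.py against the target's default socket and against /var/tmp/host.sock for the SPDK host application).

    for digest in "${digests[@]}"; do        # sha256 in this stretch, sha384 further down
      for dhgroup in "${dhgroups[@]}"; do    # groups seen in this log: ffdhe4096, ffdhe6144, ffdhe8192, null
        for keyid in "${!keys[@]}"; do       # key0..key3
          # the SPDK host (bdev_nvme) is restricted to the digest/dhgroup under test
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done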
00:15:07.520 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:07.520 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.778 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.036 00:15:08.036 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.036 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.036 23:40:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.294 { 00:15:08.294 "cntlid": 33, 00:15:08.294 "qid": 0, 00:15:08.294 "state": "enabled", 00:15:08.294 "thread": "nvmf_tgt_poll_group_000", 00:15:08.294 "listen_address": { 00:15:08.294 "trtype": "RDMA", 00:15:08.294 "adrfam": "IPv4", 00:15:08.294 "traddr": "192.168.100.8", 00:15:08.294 
"trsvcid": "4420" 00:15:08.294 }, 00:15:08.294 "peer_address": { 00:15:08.294 "trtype": "RDMA", 00:15:08.294 "adrfam": "IPv4", 00:15:08.294 "traddr": "192.168.100.8", 00:15:08.294 "trsvcid": "42673" 00:15:08.294 }, 00:15:08.294 "auth": { 00:15:08.294 "state": "completed", 00:15:08.294 "digest": "sha256", 00:15:08.294 "dhgroup": "ffdhe6144" 00:15:08.294 } 00:15:08.294 } 00:15:08.294 ]' 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.294 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.553 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:09.120 23:40:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.120 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.120 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:09.120 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:09.120 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.379 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.946 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.946 { 00:15:09.946 "cntlid": 35, 00:15:09.946 "qid": 0, 00:15:09.946 "state": "enabled", 00:15:09.946 "thread": "nvmf_tgt_poll_group_000", 00:15:09.946 "listen_address": { 00:15:09.946 "trtype": "RDMA", 00:15:09.946 "adrfam": "IPv4", 00:15:09.946 "traddr": "192.168.100.8", 00:15:09.946 "trsvcid": "4420" 00:15:09.946 }, 00:15:09.946 "peer_address": { 00:15:09.946 "trtype": "RDMA", 00:15:09.946 "adrfam": "IPv4", 00:15:09.946 "traddr": "192.168.100.8", 00:15:09.946 "trsvcid": "57630" 00:15:09.946 }, 00:15:09.946 "auth": { 00:15:09.946 "state": "completed", 00:15:09.946 "digest": "sha256", 00:15:09.946 "dhgroup": "ffdhe6144" 00:15:09.946 } 00:15:09.946 } 00:15:09.946 ]' 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:09.946 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.206 23:40:58 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.206 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.206 23:40:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.206 23:40:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:10.774 23:40:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.033 23:40:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:11.033 23:40:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:11.033 23:40:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.033 23:40:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:11.033 23:40:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.033 23:40:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.033 23:40:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.292 23:41:00 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.551 00:15:11.551 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.551 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.551 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.809 { 00:15:11.809 "cntlid": 37, 00:15:11.809 "qid": 0, 00:15:11.809 "state": "enabled", 00:15:11.809 "thread": "nvmf_tgt_poll_group_000", 00:15:11.809 "listen_address": { 00:15:11.809 "trtype": "RDMA", 00:15:11.809 "adrfam": "IPv4", 00:15:11.809 "traddr": "192.168.100.8", 00:15:11.809 "trsvcid": "4420" 00:15:11.809 }, 00:15:11.809 "peer_address": { 00:15:11.809 "trtype": "RDMA", 00:15:11.809 "adrfam": "IPv4", 00:15:11.809 "traddr": "192.168.100.8", 00:15:11.809 "trsvcid": "46100" 00:15:11.809 }, 00:15:11.809 "auth": { 00:15:11.809 "state": "completed", 00:15:11.809 "digest": "sha256", 00:15:11.809 "dhgroup": "ffdhe6144" 00:15:11.809 } 00:15:11.809 } 00:15:11.809 ]' 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.809 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.067 23:41:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:15:12.633 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.633 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.633 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:12.633 23:41:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:12.633 23:41:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.633 23:41:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:12.633 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.633 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.633 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:12.891 23:41:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.150 00:15:13.150 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.150 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.150 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.409 { 00:15:13.409 "cntlid": 39, 00:15:13.409 "qid": 0, 00:15:13.409 "state": "enabled", 00:15:13.409 "thread": "nvmf_tgt_poll_group_000", 00:15:13.409 "listen_address": { 00:15:13.409 "trtype": "RDMA", 00:15:13.409 "adrfam": "IPv4", 00:15:13.409 "traddr": "192.168.100.8", 00:15:13.409 "trsvcid": "4420" 00:15:13.409 }, 00:15:13.409 "peer_address": { 00:15:13.409 "trtype": "RDMA", 00:15:13.409 "adrfam": "IPv4", 00:15:13.409 "traddr": "192.168.100.8", 00:15:13.409 "trsvcid": "45410" 00:15:13.409 }, 00:15:13.409 "auth": { 00:15:13.409 "state": "completed", 00:15:13.409 "digest": "sha256", 00:15:13.409 "dhgroup": "ffdhe6144" 00:15:13.409 } 00:15:13.409 } 00:15:13.409 ]' 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.409 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.667 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.667 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.667 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:13.667 23:41:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:14.600 23:41:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.165 00:15:15.165 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.165 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.165 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.423 { 00:15:15.423 "cntlid": 41, 00:15:15.423 "qid": 0, 00:15:15.423 "state": "enabled", 00:15:15.423 "thread": "nvmf_tgt_poll_group_000", 00:15:15.423 "listen_address": { 00:15:15.423 "trtype": "RDMA", 00:15:15.423 "adrfam": "IPv4", 00:15:15.423 "traddr": "192.168.100.8", 00:15:15.423 "trsvcid": "4420" 00:15:15.423 }, 00:15:15.423 "peer_address": { 00:15:15.423 "trtype": "RDMA", 00:15:15.423 "adrfam": "IPv4", 00:15:15.423 "traddr": "192.168.100.8", 00:15:15.423 "trsvcid": "58412" 00:15:15.423 }, 00:15:15.423 "auth": { 00:15:15.423 "state": "completed", 00:15:15.423 "digest": "sha256", 00:15:15.423 "dhgroup": "ffdhe8192" 
00:15:15.423 } 00:15:15.423 } 00:15:15.423 ]' 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.423 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.681 23:41:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:16.326 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.326 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:16.326 23:41:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:16.326 23:41:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.326 23:41:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:16.326 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.326 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.326 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.585 23:41:05 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:16.585 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.153 00:15:17.153 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.153 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.153 23:41:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.153 { 00:15:17.153 "cntlid": 43, 00:15:17.153 "qid": 0, 00:15:17.153 "state": "enabled", 00:15:17.153 "thread": "nvmf_tgt_poll_group_000", 00:15:17.153 "listen_address": { 00:15:17.153 "trtype": "RDMA", 00:15:17.153 "adrfam": "IPv4", 00:15:17.153 "traddr": "192.168.100.8", 00:15:17.153 "trsvcid": "4420" 00:15:17.153 }, 00:15:17.153 "peer_address": { 00:15:17.153 "trtype": "RDMA", 00:15:17.153 "adrfam": "IPv4", 00:15:17.153 "traddr": "192.168.100.8", 00:15:17.153 "trsvcid": "32992" 00:15:17.153 }, 00:15:17.153 "auth": { 00:15:17.153 "state": "completed", 00:15:17.153 "digest": "sha256", 00:15:17.153 "dhgroup": "ffdhe8192" 00:15:17.153 } 00:15:17.153 } 00:15:17.153 ]' 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.153 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.412 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.412 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.412 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.412 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.412 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.412 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:18.348 23:41:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.349 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.916 
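[annotation] Each connect_authenticate iteration in this log pairs a target-side grant with a host-side attach and then verifies the negotiated auth parameters on the resulting qpair. The sketch below is reconstructed from the @34-@49 trace markers; the helper names (rpc_cmd for the nvmf target, hostrpc for the bdev_nvme host at /var/tmp/host.sock) and the NQNs are the ones printed above, but this is an approximation of the script, not its exact source. The kernel nvme-cli leg of the same iteration is sketched after the sha384 section below.

    connect_authenticate() {                  # e.g. connect_authenticate sha256 ffdhe8192 2
        local digest=$1 dhgroup=$2 keyid=$3
        local key=key$keyid
        # expands to nothing when no controller key exists for this index,
        # which is why the key3 iterations above carry no --dhchap-ctrlr-key
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

        # target side: allow the host NQN on cnode0 and bind the DH-HMAC-CHAP key(s)
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "$key" "${ckey[@]}"

        # host side: attach over RDMA with the matching key(s)
        hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "$key" "${ckey[@]}"

        # verify the controller came up and the qpair negotiated what was requested
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

        hostrpc bdev_nvme_detach_controller nvme0
    }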
00:15:18.916 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.916 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.916 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.174 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.174 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.174 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:19.174 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.174 23:41:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:19.174 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.174 { 00:15:19.174 "cntlid": 45, 00:15:19.175 "qid": 0, 00:15:19.175 "state": "enabled", 00:15:19.175 "thread": "nvmf_tgt_poll_group_000", 00:15:19.175 "listen_address": { 00:15:19.175 "trtype": "RDMA", 00:15:19.175 "adrfam": "IPv4", 00:15:19.175 "traddr": "192.168.100.8", 00:15:19.175 "trsvcid": "4420" 00:15:19.175 }, 00:15:19.175 "peer_address": { 00:15:19.175 "trtype": "RDMA", 00:15:19.175 "adrfam": "IPv4", 00:15:19.175 "traddr": "192.168.100.8", 00:15:19.175 "trsvcid": "38494" 00:15:19.175 }, 00:15:19.175 "auth": { 00:15:19.175 "state": "completed", 00:15:19.175 "digest": "sha256", 00:15:19.175 "dhgroup": "ffdhe8192" 00:15:19.175 } 00:15:19.175 } 00:15:19.175 ]' 00:15:19.175 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.175 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:19.175 23:41:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.175 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:19.175 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.175 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.175 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.175 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.433 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:15:20.000 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.000 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:20.000 23:41:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 
-- # xtrace_disable 00:15:20.000 23:41:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.000 23:41:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:20.000 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.000 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.000 23:41:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.258 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.826 00:15:20.826 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.826 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.826 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.826 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.826 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.826 23:41:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:20.826 23:41:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.085 { 00:15:21.085 "cntlid": 47, 00:15:21.085 "qid": 0, 
00:15:21.085 "state": "enabled", 00:15:21.085 "thread": "nvmf_tgt_poll_group_000", 00:15:21.085 "listen_address": { 00:15:21.085 "trtype": "RDMA", 00:15:21.085 "adrfam": "IPv4", 00:15:21.085 "traddr": "192.168.100.8", 00:15:21.085 "trsvcid": "4420" 00:15:21.085 }, 00:15:21.085 "peer_address": { 00:15:21.085 "trtype": "RDMA", 00:15:21.085 "adrfam": "IPv4", 00:15:21.085 "traddr": "192.168.100.8", 00:15:21.085 "trsvcid": "51069" 00:15:21.085 }, 00:15:21.085 "auth": { 00:15:21.085 "state": "completed", 00:15:21.085 "digest": "sha256", 00:15:21.085 "dhgroup": "ffdhe8192" 00:15:21.085 } 00:15:21.085 } 00:15:21.085 ]' 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.085 23:41:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.344 23:41:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:21.910 23:41:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 
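[annotation] Besides the SPDK bdev_nvme attach, every iteration also exercises the kernel initiator: nvme-cli connects to the same subsystem with the literal DHHC-1 secrets printed in the log, disconnects, and the host entry is removed from the target before the next combination. A minimal sketch of that leg, matching the @52-@56 trace markers, follows; $hostnqn, $hostid, $key_secret and $ctrl_secret are placeholders for the UUID-based NQN and the DHHC-1 strings shown above.

    # kernel-initiator leg, as at trace markers @52-@56
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key_secret" \
        --dhchap-ctrl-secret "$ctrl_secret"   # dropped for key3, which has no controller key
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # clean up the target before the next digest/dhgroup/key combination
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"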
00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.169 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.427 00:15:22.427 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.427 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.427 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.686 { 00:15:22.686 "cntlid": 49, 00:15:22.686 "qid": 0, 00:15:22.686 "state": "enabled", 00:15:22.686 "thread": "nvmf_tgt_poll_group_000", 00:15:22.686 "listen_address": { 00:15:22.686 "trtype": "RDMA", 00:15:22.686 "adrfam": "IPv4", 00:15:22.686 "traddr": "192.168.100.8", 00:15:22.686 "trsvcid": "4420" 00:15:22.686 }, 00:15:22.686 "peer_address": { 00:15:22.686 "trtype": "RDMA", 00:15:22.686 "adrfam": "IPv4", 00:15:22.686 "traddr": "192.168.100.8", 00:15:22.686 "trsvcid": "38775" 00:15:22.686 }, 00:15:22.686 "auth": { 00:15:22.686 "state": "completed", 00:15:22.686 "digest": "sha384", 00:15:22.686 "dhgroup": "null" 00:15:22.686 } 00:15:22.686 } 00:15:22.686 ]' 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.686 23:41:11 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.686 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.944 23:41:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:23.511 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.512 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:23.512 23:41:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:23.512 23:41:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.512 23:41:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:23.512 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.512 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:23.512 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:23.770 23:41:12 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.770 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.029 00:15:24.029 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.029 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.029 23:41:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.287 { 00:15:24.287 "cntlid": 51, 00:15:24.287 "qid": 0, 00:15:24.287 "state": "enabled", 00:15:24.287 "thread": "nvmf_tgt_poll_group_000", 00:15:24.287 "listen_address": { 00:15:24.287 "trtype": "RDMA", 00:15:24.287 "adrfam": "IPv4", 00:15:24.287 "traddr": "192.168.100.8", 00:15:24.287 "trsvcid": "4420" 00:15:24.287 }, 00:15:24.287 "peer_address": { 00:15:24.287 "trtype": "RDMA", 00:15:24.287 "adrfam": "IPv4", 00:15:24.287 "traddr": "192.168.100.8", 00:15:24.287 "trsvcid": "53453" 00:15:24.287 }, 00:15:24.287 "auth": { 00:15:24.287 "state": "completed", 00:15:24.287 "digest": "sha384", 00:15:24.287 "dhgroup": "null" 00:15:24.287 } 00:15:24.287 } 00:15:24.287 ]' 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.287 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.545 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 
--dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:25.111 23:41:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.370 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.629 00:15:25.629 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.629 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.629 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.887 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.887 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.887 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:25.887 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.888 { 00:15:25.888 "cntlid": 53, 00:15:25.888 "qid": 0, 00:15:25.888 "state": "enabled", 00:15:25.888 "thread": "nvmf_tgt_poll_group_000", 00:15:25.888 "listen_address": { 00:15:25.888 "trtype": "RDMA", 00:15:25.888 "adrfam": "IPv4", 00:15:25.888 "traddr": "192.168.100.8", 00:15:25.888 "trsvcid": "4420" 00:15:25.888 }, 00:15:25.888 "peer_address": { 00:15:25.888 "trtype": "RDMA", 00:15:25.888 "adrfam": "IPv4", 00:15:25.888 "traddr": "192.168.100.8", 00:15:25.888 "trsvcid": "37245" 00:15:25.888 }, 00:15:25.888 "auth": { 00:15:25.888 "state": "completed", 00:15:25.888 "digest": "sha384", 00:15:25.888 "dhgroup": "null" 00:15:25.888 } 00:15:25.888 } 00:15:25.888 ]' 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.888 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.147 23:41:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:15:26.714 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.974 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:26.974 23:41:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:26.974 23:41:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.974 23:41:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:26.974 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.974 23:41:15 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.975 23:41:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.234 00:15:27.234 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.235 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.235 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.493 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.493 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.493 23:41:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:27.493 23:41:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.493 23:41:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:27.493 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.493 { 00:15:27.493 "cntlid": 55, 00:15:27.493 "qid": 0, 00:15:27.493 "state": "enabled", 00:15:27.493 "thread": "nvmf_tgt_poll_group_000", 00:15:27.493 "listen_address": { 00:15:27.493 "trtype": "RDMA", 00:15:27.493 "adrfam": "IPv4", 00:15:27.493 "traddr": "192.168.100.8", 00:15:27.493 "trsvcid": "4420" 00:15:27.493 }, 00:15:27.493 "peer_address": { 00:15:27.493 "trtype": "RDMA", 00:15:27.493 "adrfam": "IPv4", 
00:15:27.493 "traddr": "192.168.100.8", 00:15:27.493 "trsvcid": "55090" 00:15:27.493 }, 00:15:27.493 "auth": { 00:15:27.493 "state": "completed", 00:15:27.493 "digest": "sha384", 00:15:27.493 "dhgroup": "null" 00:15:27.493 } 00:15:27.493 } 00:15:27.493 ]' 00:15:27.494 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.494 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.494 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.494 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:27.494 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.494 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.494 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.494 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.752 23:41:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:28.321 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:28.579 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.838 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.838 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.097 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.097 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.097 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:29.097 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.097 23:41:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:29.097 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.097 { 00:15:29.097 "cntlid": 57, 00:15:29.097 "qid": 0, 00:15:29.097 "state": "enabled", 00:15:29.097 "thread": "nvmf_tgt_poll_group_000", 00:15:29.097 "listen_address": { 00:15:29.097 "trtype": "RDMA", 00:15:29.097 "adrfam": "IPv4", 00:15:29.097 "traddr": "192.168.100.8", 00:15:29.097 "trsvcid": "4420" 00:15:29.097 }, 00:15:29.097 "peer_address": { 00:15:29.097 "trtype": "RDMA", 00:15:29.097 "adrfam": "IPv4", 00:15:29.097 "traddr": "192.168.100.8", 00:15:29.097 "trsvcid": "53979" 00:15:29.097 }, 00:15:29.097 "auth": { 00:15:29.097 "state": "completed", 00:15:29.097 "digest": "sha384", 00:15:29.097 "dhgroup": "ffdhe2048" 00:15:29.097 } 00:15:29.097 } 00:15:29.097 ]' 00:15:29.097 23:41:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.097 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.097 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.097 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:29.097 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.355 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.355 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- 
# hostrpc bdev_nvme_detach_controller nvme0 00:15:29.355 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.355 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:30.290 23:41:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.290 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:30.290 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:30.290 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.290 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:30.290 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.290 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:30.290 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:30.290 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.291 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.549 00:15:30.549 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.549 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.549 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.807 { 00:15:30.807 "cntlid": 59, 00:15:30.807 "qid": 0, 00:15:30.807 "state": "enabled", 00:15:30.807 "thread": "nvmf_tgt_poll_group_000", 00:15:30.807 "listen_address": { 00:15:30.807 "trtype": "RDMA", 00:15:30.807 "adrfam": "IPv4", 00:15:30.807 "traddr": "192.168.100.8", 00:15:30.807 "trsvcid": "4420" 00:15:30.807 }, 00:15:30.807 "peer_address": { 00:15:30.807 "trtype": "RDMA", 00:15:30.807 "adrfam": "IPv4", 00:15:30.807 "traddr": "192.168.100.8", 00:15:30.807 "trsvcid": "50494" 00:15:30.807 }, 00:15:30.807 "auth": { 00:15:30.807 "state": "completed", 00:15:30.807 "digest": "sha384", 00:15:30.807 "dhgroup": "ffdhe2048" 00:15:30.807 } 00:15:30.807 } 00:15:30.807 ]' 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.807 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.066 23:41:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:31.633 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:31.891 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:31.892 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.892 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.892 23:41:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:31.892 23:41:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.892 23:41:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:31.892 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.892 23:41:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.150 00:15:32.150 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.150 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.150 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.408 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.408 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.408 23:41:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:32.409 23:41:21 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.409 23:41:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:32.409 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.409 { 00:15:32.409 "cntlid": 61, 00:15:32.409 "qid": 0, 00:15:32.409 "state": "enabled", 00:15:32.409 "thread": "nvmf_tgt_poll_group_000", 00:15:32.409 "listen_address": { 00:15:32.409 "trtype": "RDMA", 00:15:32.409 "adrfam": "IPv4", 00:15:32.409 "traddr": "192.168.100.8", 00:15:32.409 "trsvcid": "4420" 00:15:32.409 }, 00:15:32.409 "peer_address": { 00:15:32.409 "trtype": "RDMA", 00:15:32.409 "adrfam": "IPv4", 00:15:32.409 "traddr": "192.168.100.8", 00:15:32.409 "trsvcid": "57734" 00:15:32.409 }, 00:15:32.409 "auth": { 00:15:32.409 "state": "completed", 00:15:32.409 "digest": "sha384", 00:15:32.409 "dhgroup": "ffdhe2048" 00:15:32.409 } 00:15:32.409 } 00:15:32.409 ]' 00:15:32.409 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.409 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.409 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.409 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:32.409 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.667 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.667 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.667 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.667 23:41:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:15:33.233 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.491 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:33.491 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:33.491 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.491 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:33.491 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.492 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:33.492 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:33.750 23:41:22 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.750 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.750 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.008 { 00:15:34.008 "cntlid": 63, 00:15:34.008 "qid": 0, 00:15:34.008 "state": "enabled", 00:15:34.008 "thread": "nvmf_tgt_poll_group_000", 00:15:34.008 "listen_address": { 00:15:34.008 "trtype": "RDMA", 00:15:34.008 "adrfam": "IPv4", 00:15:34.008 "traddr": "192.168.100.8", 00:15:34.008 "trsvcid": "4420" 00:15:34.008 }, 00:15:34.008 "peer_address": { 00:15:34.008 "trtype": "RDMA", 00:15:34.008 "adrfam": "IPv4", 00:15:34.008 "traddr": "192.168.100.8", 00:15:34.008 "trsvcid": "57686" 00:15:34.008 }, 00:15:34.008 "auth": { 00:15:34.008 "state": "completed", 00:15:34.008 "digest": "sha384", 00:15:34.008 "dhgroup": "ffdhe2048" 00:15:34.008 } 00:15:34.008 } 00:15:34.008 ]' 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 
00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.008 23:41:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.266 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.266 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.266 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.266 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.266 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.266 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:35.196 23:41:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:35.196 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.197 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.453 00:15:35.453 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.453 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.453 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.711 { 00:15:35.711 "cntlid": 65, 00:15:35.711 "qid": 0, 00:15:35.711 "state": "enabled", 00:15:35.711 "thread": "nvmf_tgt_poll_group_000", 00:15:35.711 "listen_address": { 00:15:35.711 "trtype": "RDMA", 00:15:35.711 "adrfam": "IPv4", 00:15:35.711 "traddr": "192.168.100.8", 00:15:35.711 "trsvcid": "4420" 00:15:35.711 }, 00:15:35.711 "peer_address": { 00:15:35.711 "trtype": "RDMA", 00:15:35.711 "adrfam": "IPv4", 00:15:35.711 "traddr": "192.168.100.8", 00:15:35.711 "trsvcid": "45028" 00:15:35.711 }, 00:15:35.711 "auth": { 00:15:35.711 "state": "completed", 00:15:35.711 "digest": "sha384", 00:15:35.711 "dhgroup": "ffdhe3072" 00:15:35.711 } 00:15:35.711 } 00:15:35.711 ]' 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.711 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.969 23:41:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:36.534 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.791 23:41:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:36.792 23:41:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.049 23:41:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:37.049 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.049 23:41:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.049 00:15:37.049 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.308 { 00:15:37.308 "cntlid": 67, 00:15:37.308 "qid": 0, 00:15:37.308 "state": "enabled", 00:15:37.308 "thread": "nvmf_tgt_poll_group_000", 00:15:37.308 "listen_address": { 00:15:37.308 "trtype": "RDMA", 00:15:37.308 "adrfam": "IPv4", 00:15:37.308 "traddr": "192.168.100.8", 00:15:37.308 "trsvcid": "4420" 00:15:37.308 }, 00:15:37.308 "peer_address": { 00:15:37.308 "trtype": "RDMA", 00:15:37.308 "adrfam": "IPv4", 00:15:37.308 "traddr": "192.168.100.8", 00:15:37.308 "trsvcid": "55126" 00:15:37.308 }, 00:15:37.308 "auth": { 00:15:37.308 "state": "completed", 00:15:37.308 "digest": "sha384", 00:15:37.308 "dhgroup": "ffdhe3072" 00:15:37.308 } 00:15:37.308 } 00:15:37.308 ]' 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.308 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.566 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.566 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.566 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.566 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.566 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.566 23:41:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:38.499 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:38.500 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.500 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.500 23:41:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:38.500 23:41:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.500 23:41:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:38.500 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.500 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.758 00:15:38.758 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.758 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.758 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.015 { 00:15:39.015 "cntlid": 69, 00:15:39.015 "qid": 
0, 00:15:39.015 "state": "enabled", 00:15:39.015 "thread": "nvmf_tgt_poll_group_000", 00:15:39.015 "listen_address": { 00:15:39.015 "trtype": "RDMA", 00:15:39.015 "adrfam": "IPv4", 00:15:39.015 "traddr": "192.168.100.8", 00:15:39.015 "trsvcid": "4420" 00:15:39.015 }, 00:15:39.015 "peer_address": { 00:15:39.015 "trtype": "RDMA", 00:15:39.015 "adrfam": "IPv4", 00:15:39.015 "traddr": "192.168.100.8", 00:15:39.015 "trsvcid": "56252" 00:15:39.015 }, 00:15:39.015 "auth": { 00:15:39.015 "state": "completed", 00:15:39.015 "digest": "sha384", 00:15:39.015 "dhgroup": "ffdhe3072" 00:15:39.015 } 00:15:39.015 } 00:15:39.015 ]' 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.015 23:41:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.274 23:41:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:15:39.841 23:41:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.100 23:41:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:40.100 23:41:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:40.100 23:41:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.100 23:41:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:40.100 23:41:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.100 23:41:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:40.100 23:41:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.100 23:41:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.100 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.359 00:15:40.359 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:40.359 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:40.359 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:40.617 { 00:15:40.617 "cntlid": 71, 00:15:40.617 "qid": 0, 00:15:40.617 "state": "enabled", 00:15:40.617 "thread": "nvmf_tgt_poll_group_000", 00:15:40.617 "listen_address": { 00:15:40.617 "trtype": "RDMA", 00:15:40.617 "adrfam": "IPv4", 00:15:40.617 "traddr": "192.168.100.8", 00:15:40.617 "trsvcid": "4420" 00:15:40.617 }, 00:15:40.617 "peer_address": { 00:15:40.617 "trtype": "RDMA", 00:15:40.617 "adrfam": "IPv4", 00:15:40.617 "traddr": "192.168.100.8", 00:15:40.617 "trsvcid": "45375" 00:15:40.617 }, 00:15:40.617 "auth": { 00:15:40.617 "state": "completed", 00:15:40.617 "digest": "sha384", 00:15:40.617 "dhgroup": "ffdhe3072" 00:15:40.617 } 00:15:40.617 } 00:15:40.617 ]' 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
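
For readability, the host-side flow that target/auth.sh repeats above for each digest/dhgroup/key combination can be condensed as the sketch below. It only restates commands already visible in this log (rpc.py against /var/tmp/host.sock plus the qpair check via jq); key1/ckey1 stand for DH-CHAP key names set up earlier in the run, outside the excerpt shown here, so treat them as placeholders rather than literal values.

# Sketch of one connect_authenticate iteration, condensed from the log above.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

# 1. Restrict the host to one digest/DH-group pair for this iteration.
$RPC -s $HOSTSOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# 2. Allow the host on the subsystem with the key under test (target-side RPC).
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach a controller from the host with the same key, then verify the qpair
#    actually authenticated.
$RPC -s $HOSTSOCK bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 -q $HOSTNQN -n $SUBNQN \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'   # expect "completed"

# 4. Tear down before the next combination.
$RPC -s $HOSTSOCK bdev_nvme_detach_controller nvme0
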
00:15:40.617 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.875 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.875 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.875 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.875 23:41:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:41.440 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.697 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:41.697 23:41:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.697 23:41:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.697 23:41:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.697 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.697 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.697 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.697 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.955 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.213 00:15:42.213 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.213 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.213 23:41:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.213 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.213 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.213 23:41:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:42.213 23:41:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.213 23:41:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:42.213 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.213 { 00:15:42.213 "cntlid": 73, 00:15:42.213 "qid": 0, 00:15:42.213 "state": "enabled", 00:15:42.213 "thread": "nvmf_tgt_poll_group_000", 00:15:42.213 "listen_address": { 00:15:42.213 "trtype": "RDMA", 00:15:42.213 "adrfam": "IPv4", 00:15:42.213 "traddr": "192.168.100.8", 00:15:42.213 "trsvcid": "4420" 00:15:42.213 }, 00:15:42.213 "peer_address": { 00:15:42.213 "trtype": "RDMA", 00:15:42.213 "adrfam": "IPv4", 00:15:42.213 "traddr": "192.168.100.8", 00:15:42.213 "trsvcid": "54384" 00:15:42.213 }, 00:15:42.213 "auth": { 00:15:42.213 "state": "completed", 00:15:42.213 "digest": "sha384", 00:15:42.213 "dhgroup": "ffdhe4096" 00:15:42.213 } 00:15:42.213 } 00:15:42.213 ]' 00:15:42.213 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.471 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.471 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.471 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:42.471 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.471 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.471 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.471 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.730 23:41:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:43.343 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.343 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.343 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:43.343 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:43.343 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.343 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:43.343 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.343 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:43.343 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.613 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.871 00:15:43.871 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:43.871 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.871 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.871 23:41:32 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.871 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.871 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:43.871 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.129 { 00:15:44.129 "cntlid": 75, 00:15:44.129 "qid": 0, 00:15:44.129 "state": "enabled", 00:15:44.129 "thread": "nvmf_tgt_poll_group_000", 00:15:44.129 "listen_address": { 00:15:44.129 "trtype": "RDMA", 00:15:44.129 "adrfam": "IPv4", 00:15:44.129 "traddr": "192.168.100.8", 00:15:44.129 "trsvcid": "4420" 00:15:44.129 }, 00:15:44.129 "peer_address": { 00:15:44.129 "trtype": "RDMA", 00:15:44.129 "adrfam": "IPv4", 00:15:44.129 "traddr": "192.168.100.8", 00:15:44.129 "trsvcid": "48053" 00:15:44.129 }, 00:15:44.129 "auth": { 00:15:44.129 "state": "completed", 00:15:44.129 "digest": "sha384", 00:15:44.129 "dhgroup": "ffdhe4096" 00:15:44.129 } 00:15:44.129 } 00:15:44.129 ]' 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.129 23:41:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.388 23:41:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:44.952 23:41:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.952 23:41:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:44.952 23:41:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:44.952 23:41:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.210 23:41:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:45.210 23:41:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.210 23:41:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:45.210 23:41:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.210 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:45.468 00:15:45.468 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.468 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.468 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.727 { 00:15:45.727 "cntlid": 77, 00:15:45.727 "qid": 0, 00:15:45.727 "state": "enabled", 00:15:45.727 "thread": "nvmf_tgt_poll_group_000", 00:15:45.727 "listen_address": { 00:15:45.727 "trtype": "RDMA", 00:15:45.727 "adrfam": "IPv4", 00:15:45.727 "traddr": "192.168.100.8", 00:15:45.727 "trsvcid": "4420" 00:15:45.727 }, 00:15:45.727 "peer_address": { 00:15:45.727 "trtype": 
"RDMA", 00:15:45.727 "adrfam": "IPv4", 00:15:45.727 "traddr": "192.168.100.8", 00:15:45.727 "trsvcid": "58014" 00:15:45.727 }, 00:15:45.727 "auth": { 00:15:45.727 "state": "completed", 00:15:45.727 "digest": "sha384", 00:15:45.727 "dhgroup": "ffdhe4096" 00:15:45.727 } 00:15:45.727 } 00:15:45.727 ]' 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:45.727 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.985 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.985 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.985 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.985 23:41:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:15:46.551 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.809 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:46.809 23:41:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:46.809 23:41:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.809 23:41:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:46.809 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.809 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.809 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.068 23:41:35 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:47.326 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.326 { 00:15:47.326 "cntlid": 79, 00:15:47.326 "qid": 0, 00:15:47.326 "state": "enabled", 00:15:47.326 "thread": "nvmf_tgt_poll_group_000", 00:15:47.326 "listen_address": { 00:15:47.326 "trtype": "RDMA", 00:15:47.326 "adrfam": "IPv4", 00:15:47.326 "traddr": "192.168.100.8", 00:15:47.326 "trsvcid": "4420" 00:15:47.326 }, 00:15:47.326 "peer_address": { 00:15:47.326 "trtype": "RDMA", 00:15:47.326 "adrfam": "IPv4", 00:15:47.326 "traddr": "192.168.100.8", 00:15:47.326 "trsvcid": "40196" 00:15:47.326 }, 00:15:47.326 "auth": { 00:15:47.326 "state": "completed", 00:15:47.326 "digest": "sha384", 00:15:47.326 "dhgroup": "ffdhe4096" 00:15:47.326 } 00:15:47.326 } 00:15:47.326 ]' 00:15:47.326 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.327 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.585 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.585 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:47.585 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.585 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.585 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
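
Each pass also exercises the kernel nvme-cli initiator against the same subsystem, passing the DHHC-1 secrets directly instead of key names. A condensed sketch follows, again using only commands shown in this log; the DHHC-1 strings are abbreviated placeholders for the secrets printed above (the digit pair after "DHHC-1:" appears to indicate how the secret is transformed, e.g. 00 for an untransformed key), so substitute the full values when reproducing.

# Sketch of the nvme-cli leg of one iteration (secrets abbreviated; use the
# full DHHC-1 strings from the log when running this by hand).
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

nvme connect -t rdma -a 192.168.100.8 -n $SUBNQN -i 1 \
    -q $HOSTNQN --hostid 803833e2-2ada-e911-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:00:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'
nvme disconnect -n $SUBNQN

# Revoke the host again so the next digest/dhgroup combination starts clean.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_remove_host $SUBNQN $HOSTNQN
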
00:15:47.585 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.843 23:41:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.409 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.667 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:48.926 00:15:48.926 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.926 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.926 23:41:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.184 { 00:15:49.184 "cntlid": 81, 00:15:49.184 "qid": 0, 00:15:49.184 "state": "enabled", 00:15:49.184 "thread": "nvmf_tgt_poll_group_000", 00:15:49.184 "listen_address": { 00:15:49.184 "trtype": "RDMA", 00:15:49.184 "adrfam": "IPv4", 00:15:49.184 "traddr": "192.168.100.8", 00:15:49.184 "trsvcid": "4420" 00:15:49.184 }, 00:15:49.184 "peer_address": { 00:15:49.184 "trtype": "RDMA", 00:15:49.184 "adrfam": "IPv4", 00:15:49.184 "traddr": "192.168.100.8", 00:15:49.184 "trsvcid": "37888" 00:15:49.184 }, 00:15:49.184 "auth": { 00:15:49.184 "state": "completed", 00:15:49.184 "digest": "sha384", 00:15:49.184 "dhgroup": "ffdhe6144" 00:15:49.184 } 00:15:49.184 } 00:15:49.184 ]' 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.184 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.443 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:50.009 23:41:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.267 23:41:39 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:50.267 23:41:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:50.267 23:41:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.267 23:41:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:50.267 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.267 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.267 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.526 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:50.784 00:15:50.784 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.784 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.784 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:51.043 23:41:39 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.043 { 00:15:51.043 "cntlid": 83, 00:15:51.043 "qid": 0, 00:15:51.043 "state": "enabled", 00:15:51.043 "thread": "nvmf_tgt_poll_group_000", 00:15:51.043 "listen_address": { 00:15:51.043 "trtype": "RDMA", 00:15:51.043 "adrfam": "IPv4", 00:15:51.043 "traddr": "192.168.100.8", 00:15:51.043 "trsvcid": "4420" 00:15:51.043 }, 00:15:51.043 "peer_address": { 00:15:51.043 "trtype": "RDMA", 00:15:51.043 "adrfam": "IPv4", 00:15:51.043 "traddr": "192.168.100.8", 00:15:51.043 "trsvcid": "36997" 00:15:51.043 }, 00:15:51.043 "auth": { 00:15:51.043 "state": "completed", 00:15:51.043 "digest": "sha384", 00:15:51.043 "dhgroup": "ffdhe6144" 00:15:51.043 } 00:15:51.043 } 00:15:51.043 ]' 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.043 23:41:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.301 23:41:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:51.866 23:41:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.866 23:41:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:51.866 23:41:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:51.866 23:41:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.866 23:41:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:51.866 23:41:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.866 23:41:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:51.866 23:41:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:52.124 23:41:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.125 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:52.383 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.641 { 00:15:52.641 "cntlid": 85, 00:15:52.641 "qid": 0, 00:15:52.641 "state": "enabled", 00:15:52.641 "thread": "nvmf_tgt_poll_group_000", 00:15:52.641 "listen_address": { 00:15:52.641 "trtype": "RDMA", 00:15:52.641 "adrfam": "IPv4", 00:15:52.641 "traddr": "192.168.100.8", 00:15:52.641 "trsvcid": "4420" 00:15:52.641 }, 00:15:52.641 "peer_address": { 00:15:52.641 "trtype": "RDMA", 00:15:52.641 "adrfam": "IPv4", 00:15:52.641 "traddr": "192.168.100.8", 00:15:52.641 "trsvcid": "40911" 00:15:52.641 }, 00:15:52.641 "auth": { 00:15:52.641 "state": "completed", 00:15:52.641 "digest": "sha384", 00:15:52.641 "dhgroup": "ffdhe6144" 00:15:52.641 } 00:15:52.641 } 00:15:52.641 ]' 00:15:52.641 23:41:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:52.641 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.899 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:52.899 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.899 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.899 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.899 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.899 23:41:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:53.833 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:53.834 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.834 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:15:53.834 23:41:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:53.834 23:41:42 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:53.834 23:41:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:53.834 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.834 23:41:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:54.400 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.400 { 00:15:54.400 "cntlid": 87, 00:15:54.400 "qid": 0, 00:15:54.400 "state": "enabled", 00:15:54.400 "thread": "nvmf_tgt_poll_group_000", 00:15:54.400 "listen_address": { 00:15:54.400 "trtype": "RDMA", 00:15:54.400 "adrfam": "IPv4", 00:15:54.400 "traddr": "192.168.100.8", 00:15:54.400 "trsvcid": "4420" 00:15:54.400 }, 00:15:54.400 "peer_address": { 00:15:54.400 "trtype": "RDMA", 00:15:54.400 "adrfam": "IPv4", 00:15:54.400 "traddr": "192.168.100.8", 00:15:54.400 "trsvcid": "39195" 00:15:54.400 }, 00:15:54.400 "auth": { 00:15:54.400 "state": "completed", 00:15:54.400 "digest": "sha384", 00:15:54.400 "dhgroup": "ffdhe6144" 00:15:54.400 } 00:15:54.400 } 00:15:54.400 ]' 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:54.400 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.658 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.658 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.658 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.658 23:41:43 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:15:55.223 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.481 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:55.481 23:41:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:55.481 23:41:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.481 23:41:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:55.481 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:55.481 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.481 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.481 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.738 23:41:44 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:55.739 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.739 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.996 00:15:55.996 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.996 23:41:44 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.996 23:41:44 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.255 { 00:15:56.255 "cntlid": 89, 00:15:56.255 "qid": 0, 00:15:56.255 "state": "enabled", 00:15:56.255 "thread": "nvmf_tgt_poll_group_000", 00:15:56.255 "listen_address": { 00:15:56.255 "trtype": "RDMA", 00:15:56.255 "adrfam": "IPv4", 00:15:56.255 "traddr": "192.168.100.8", 00:15:56.255 "trsvcid": "4420" 00:15:56.255 }, 00:15:56.255 "peer_address": { 00:15:56.255 "trtype": "RDMA", 00:15:56.255 "adrfam": "IPv4", 00:15:56.255 "traddr": "192.168.100.8", 00:15:56.255 "trsvcid": "48486" 00:15:56.255 }, 00:15:56.255 "auth": { 00:15:56.255 "state": "completed", 00:15:56.255 "digest": "sha384", 00:15:56.255 "dhgroup": "ffdhe8192" 00:15:56.255 } 00:15:56.255 } 00:15:56.255 ]' 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.255 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.513 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.513 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.513 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.513 23:41:45 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:15:57.080 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.338 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:57.338 23:41:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:57.338 23:41:46 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.338 23:41:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:57.338 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.338 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.338 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.597 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.855 00:15:57.855 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.855 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.855 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.114 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.114 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.114 23:41:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:58.114 23:41:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.114 23:41:46 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:58.114 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.114 { 00:15:58.114 "cntlid": 91, 00:15:58.114 "qid": 
0, 00:15:58.114 "state": "enabled", 00:15:58.114 "thread": "nvmf_tgt_poll_group_000", 00:15:58.114 "listen_address": { 00:15:58.114 "trtype": "RDMA", 00:15:58.114 "adrfam": "IPv4", 00:15:58.114 "traddr": "192.168.100.8", 00:15:58.114 "trsvcid": "4420" 00:15:58.114 }, 00:15:58.114 "peer_address": { 00:15:58.114 "trtype": "RDMA", 00:15:58.114 "adrfam": "IPv4", 00:15:58.114 "traddr": "192.168.100.8", 00:15:58.114 "trsvcid": "52971" 00:15:58.114 }, 00:15:58.114 "auth": { 00:15:58.114 "state": "completed", 00:15:58.114 "digest": "sha384", 00:15:58.114 "dhgroup": "ffdhe8192" 00:15:58.114 } 00:15:58.114 } 00:15:58.114 ]' 00:15:58.114 23:41:46 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.114 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.114 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.114 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.114 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.373 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.373 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.373 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.373 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:15:58.940 23:41:47 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.199 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.199 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:59.199 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:59.199 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.199 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:59.199 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.199 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.199 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.456 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:59.456 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.456 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:59.457 23:41:48 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:59.457 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:59.457 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.457 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.457 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:59.457 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.457 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:59.457 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.457 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.714 00:15:59.714 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.714 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.714 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.972 { 00:15:59.972 "cntlid": 93, 00:15:59.972 "qid": 0, 00:15:59.972 "state": "enabled", 00:15:59.972 "thread": "nvmf_tgt_poll_group_000", 00:15:59.972 "listen_address": { 00:15:59.972 "trtype": "RDMA", 00:15:59.972 "adrfam": "IPv4", 00:15:59.972 "traddr": "192.168.100.8", 00:15:59.972 "trsvcid": "4420" 00:15:59.972 }, 00:15:59.972 "peer_address": { 00:15:59.972 "trtype": "RDMA", 00:15:59.972 "adrfam": "IPv4", 00:15:59.972 "traddr": "192.168.100.8", 00:15:59.972 "trsvcid": "54030" 00:15:59.972 }, 00:15:59.972 "auth": { 00:15:59.972 "state": "completed", 00:15:59.972 "digest": "sha384", 00:15:59.972 "dhgroup": "ffdhe8192" 00:15:59.972 } 00:15:59.972 } 00:15:59.972 ]' 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.972 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.231 23:41:48 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:00.231 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.231 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.231 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.231 23:41:48 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.231 23:41:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:16:01.163 23:41:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.163 23:41:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:01.163 23:41:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:01.163 23:41:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.163 23:41:49 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:01.163 23:41:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.163 23:41:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.163 23:41:49 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.163 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.728 00:16:01.728 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.728 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.728 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.986 { 00:16:01.986 "cntlid": 95, 00:16:01.986 "qid": 0, 00:16:01.986 "state": "enabled", 00:16:01.986 "thread": "nvmf_tgt_poll_group_000", 00:16:01.986 "listen_address": { 00:16:01.986 "trtype": "RDMA", 00:16:01.986 "adrfam": "IPv4", 00:16:01.986 "traddr": "192.168.100.8", 00:16:01.986 "trsvcid": "4420" 00:16:01.986 }, 00:16:01.986 "peer_address": { 00:16:01.986 "trtype": "RDMA", 00:16:01.986 "adrfam": "IPv4", 00:16:01.986 "traddr": "192.168.100.8", 00:16:01.986 "trsvcid": "46659" 00:16:01.986 }, 00:16:01.986 "auth": { 00:16:01.986 "state": "completed", 00:16:01.986 "digest": "sha384", 00:16:01.986 "dhgroup": "ffdhe8192" 00:16:01.986 } 00:16:01.986 } 00:16:01.986 ]' 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.986 23:41:50 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.242 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:16:02.807 23:41:51 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.807 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:02.807 23:41:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:02.807 23:41:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.066 23:41:51 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.325 00:16:03.325 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.325 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.325 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:03.584 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.584 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.584 23:41:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:03.584 23:41:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.585 { 00:16:03.585 "cntlid": 97, 00:16:03.585 "qid": 0, 00:16:03.585 "state": "enabled", 00:16:03.585 "thread": "nvmf_tgt_poll_group_000", 00:16:03.585 "listen_address": { 00:16:03.585 "trtype": "RDMA", 00:16:03.585 "adrfam": "IPv4", 00:16:03.585 "traddr": "192.168.100.8", 00:16:03.585 "trsvcid": "4420" 00:16:03.585 }, 00:16:03.585 "peer_address": { 00:16:03.585 "trtype": "RDMA", 00:16:03.585 "adrfam": "IPv4", 00:16:03.585 "traddr": "192.168.100.8", 00:16:03.585 "trsvcid": "33208" 00:16:03.585 }, 00:16:03.585 "auth": { 00:16:03.585 "state": "completed", 00:16:03.585 "digest": "sha512", 00:16:03.585 "dhgroup": "null" 00:16:03.585 } 00:16:03.585 } 00:16:03.585 ]' 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.585 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.843 23:41:52 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:16:04.412 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.670 
23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.670 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.928 00:16:04.928 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:04.928 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:04.928 23:41:53 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.187 { 00:16:05.187 "cntlid": 99, 00:16:05.187 "qid": 0, 00:16:05.187 "state": "enabled", 00:16:05.187 "thread": "nvmf_tgt_poll_group_000", 00:16:05.187 "listen_address": { 00:16:05.187 "trtype": "RDMA", 00:16:05.187 "adrfam": "IPv4", 00:16:05.187 "traddr": "192.168.100.8", 00:16:05.187 "trsvcid": "4420" 00:16:05.187 }, 
00:16:05.187 "peer_address": { 00:16:05.187 "trtype": "RDMA", 00:16:05.187 "adrfam": "IPv4", 00:16:05.187 "traddr": "192.168.100.8", 00:16:05.187 "trsvcid": "45265" 00:16:05.187 }, 00:16:05.187 "auth": { 00:16:05.187 "state": "completed", 00:16:05.187 "digest": "sha512", 00:16:05.187 "dhgroup": "null" 00:16:05.187 } 00:16:05.187 } 00:16:05.187 ]' 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.187 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.445 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:16:06.013 23:41:54 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.272 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.531 00:16:06.531 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.531 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.531 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.788 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.788 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.788 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:06.788 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.788 23:41:55 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:06.788 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.788 { 00:16:06.788 "cntlid": 101, 00:16:06.788 "qid": 0, 00:16:06.788 "state": "enabled", 00:16:06.788 "thread": "nvmf_tgt_poll_group_000", 00:16:06.788 "listen_address": { 00:16:06.788 "trtype": "RDMA", 00:16:06.788 "adrfam": "IPv4", 00:16:06.788 "traddr": "192.168.100.8", 00:16:06.788 "trsvcid": "4420" 00:16:06.789 }, 00:16:06.789 "peer_address": { 00:16:06.789 "trtype": "RDMA", 00:16:06.789 "adrfam": "IPv4", 00:16:06.789 "traddr": "192.168.100.8", 00:16:06.789 "trsvcid": "46336" 00:16:06.789 }, 00:16:06.789 "auth": { 00:16:06.789 "state": "completed", 00:16:06.789 "digest": "sha512", 00:16:06.789 "dhgroup": "null" 00:16:06.789 } 00:16:06.789 } 00:16:06.789 ]' 00:16:06.789 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.789 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.789 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.789 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:06.789 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.046 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.046 23:41:55 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.046 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.046 23:41:55 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:16:07.613 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.870 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:07.870 23:41:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:07.870 23:41:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.870 23:41:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:07.870 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.870 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.870 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.129 23:41:56 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.386 00:16:08.386 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.386 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.386 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.386 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.386 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.386 23:41:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:08.386 23:41:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.387 23:41:57 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:08.387 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.387 { 00:16:08.387 "cntlid": 103, 00:16:08.387 "qid": 0, 00:16:08.387 "state": "enabled", 00:16:08.387 "thread": "nvmf_tgt_poll_group_000", 00:16:08.387 "listen_address": { 00:16:08.387 "trtype": "RDMA", 00:16:08.387 "adrfam": "IPv4", 00:16:08.387 "traddr": "192.168.100.8", 00:16:08.387 "trsvcid": "4420" 00:16:08.387 }, 00:16:08.387 "peer_address": { 00:16:08.387 "trtype": "RDMA", 00:16:08.387 "adrfam": "IPv4", 00:16:08.387 "traddr": "192.168.100.8", 00:16:08.387 "trsvcid": "36279" 00:16:08.387 }, 00:16:08.387 "auth": { 00:16:08.387 "state": "completed", 00:16:08.387 "digest": "sha512", 00:16:08.387 "dhgroup": "null" 00:16:08.387 } 00:16:08.387 } 00:16:08.387 ]' 00:16:08.387 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.387 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.387 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.644 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:08.644 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.644 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.644 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.644 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.902 23:41:57 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:16:09.523 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.524 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:09.524 23:41:58 
nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:09.524 23:41:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.524 23:41:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:09.524 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.524 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.524 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:09.524 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.808 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.076 00:16:10.076 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.076 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.076 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.076 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.076 23:41:58 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.076 23:41:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:10.076 23:41:58 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
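For reference, each iteration of the nvmf_auth_target loop above exercises the same connect/verify/teardown pattern; the sketch below condenses it into plain shell using only the addresses, NQNs and key names that appear in this log (192.168.100.8:4420, nqn.2024-03.io.spdk:cnode0, the 803833e2-... host UUID, key0/ckey0). The rpc_cmd and hostrpc wrappers from auth.sh are shown expanded to the underlying scripts/rpc.py calls, socket flags other than the host-side /var/tmp/host.sock are omitted, and the plaintext DHHC-1 secrets are stood in by shell variables, so treat this as a hedged outline of the flow rather than the verbatim test script.

    # Host-side NVMe bdev options: restrict DH-HMAC-CHAP negotiation to one
    # digest/dhgroup pair for this iteration (sha512/ffdhe2048 here).
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

    # Target side (rpc_cmd in the log): allow the host NQN on the subsystem
    # with a specific key pair from the keys loaded earlier in auth.sh.
    scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side (hostrpc in the log): attach a controller over RDMA with the
    # matching key pair, then confirm it shows up as nvme0.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

    # Verify on the target that the new qpair completed authentication with the
    # expected digest and dhgroup.
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.state, .[0].auth.digest, .[0].auth.dhgroup'

    # Tear the bdev controller down, repeat the handshake through the kernel
    # initiator using the plaintext secrets, then remove the host again.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid 803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-secret "$key_plaintext" --dhchap-ctrl-secret "$ckey_plaintext"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562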
00:16:10.076 23:41:59 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:10.076 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.076 { 00:16:10.076 "cntlid": 105, 00:16:10.076 "qid": 0, 00:16:10.076 "state": "enabled", 00:16:10.076 "thread": "nvmf_tgt_poll_group_000", 00:16:10.076 "listen_address": { 00:16:10.076 "trtype": "RDMA", 00:16:10.076 "adrfam": "IPv4", 00:16:10.076 "traddr": "192.168.100.8", 00:16:10.076 "trsvcid": "4420" 00:16:10.076 }, 00:16:10.076 "peer_address": { 00:16:10.076 "trtype": "RDMA", 00:16:10.076 "adrfam": "IPv4", 00:16:10.076 "traddr": "192.168.100.8", 00:16:10.076 "trsvcid": "50626" 00:16:10.076 }, 00:16:10.076 "auth": { 00:16:10.076 "state": "completed", 00:16:10.076 "digest": "sha512", 00:16:10.076 "dhgroup": "ffdhe2048" 00:16:10.076 } 00:16:10.076 } 00:16:10.076 ]' 00:16:10.076 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.076 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.076 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.333 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.333 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.333 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.333 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.333 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.591 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:16:11.157 23:41:59 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.157 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:11.157 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:11.157 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.157 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:11.157 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.157 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.157 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.416 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.674 00:16:11.674 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.674 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.674 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.932 { 00:16:11.932 "cntlid": 107, 00:16:11.932 "qid": 0, 00:16:11.932 "state": "enabled", 00:16:11.932 "thread": "nvmf_tgt_poll_group_000", 00:16:11.932 "listen_address": { 00:16:11.932 "trtype": "RDMA", 00:16:11.932 "adrfam": "IPv4", 00:16:11.932 "traddr": "192.168.100.8", 00:16:11.932 "trsvcid": "4420" 00:16:11.932 }, 00:16:11.932 "peer_address": { 00:16:11.932 "trtype": "RDMA", 00:16:11.932 "adrfam": "IPv4", 00:16:11.932 "traddr": "192.168.100.8", 00:16:11.932 "trsvcid": "41776" 00:16:11.932 }, 00:16:11.932 "auth": { 00:16:11.932 "state": "completed", 00:16:11.932 "digest": "sha512", 00:16:11.932 "dhgroup": "ffdhe2048" 00:16:11.932 } 00:16:11.932 } 00:16:11.932 ]' 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.932 23:42:00 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.191 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:16:12.756 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.757 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:12.757 23:42:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:12.757 23:42:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.757 23:42:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:12.757 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.757 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.757 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.018 23:42:01 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.276 00:16:13.276 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.276 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.276 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.533 { 00:16:13.533 "cntlid": 109, 00:16:13.533 "qid": 0, 00:16:13.533 "state": "enabled", 00:16:13.533 "thread": "nvmf_tgt_poll_group_000", 00:16:13.533 "listen_address": { 00:16:13.533 "trtype": "RDMA", 00:16:13.533 "adrfam": "IPv4", 00:16:13.533 "traddr": "192.168.100.8", 00:16:13.533 "trsvcid": "4420" 00:16:13.533 }, 00:16:13.533 "peer_address": { 00:16:13.533 "trtype": "RDMA", 00:16:13.533 "adrfam": "IPv4", 00:16:13.533 "traddr": "192.168.100.8", 00:16:13.533 "trsvcid": "55691" 00:16:13.533 }, 00:16:13.533 "auth": { 00:16:13.533 "state": "completed", 00:16:13.533 "digest": "sha512", 00:16:13.533 "dhgroup": "ffdhe2048" 00:16:13.533 } 00:16:13.533 } 00:16:13.533 ]' 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.533 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.534 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.534 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.534 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.791 23:42:02 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:16:14.357 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.614 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:14.614 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:14.614 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.614 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:14.614 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.614 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.614 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.614 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.615 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.872 00:16:14.872 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.872 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.872 23:42:03 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.130 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.130 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.130 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:15.130 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.130 23:42:03 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:15.130 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.130 { 00:16:15.130 "cntlid": 111, 00:16:15.130 "qid": 0, 00:16:15.130 "state": "enabled", 00:16:15.130 "thread": "nvmf_tgt_poll_group_000", 00:16:15.130 "listen_address": { 00:16:15.130 "trtype": "RDMA", 00:16:15.130 "adrfam": "IPv4", 00:16:15.130 "traddr": "192.168.100.8", 00:16:15.130 "trsvcid": "4420" 00:16:15.130 }, 00:16:15.130 "peer_address": { 00:16:15.130 "trtype": "RDMA", 00:16:15.130 "adrfam": "IPv4", 00:16:15.130 "traddr": "192.168.100.8", 00:16:15.130 "trsvcid": "49010" 00:16:15.130 }, 00:16:15.130 "auth": { 00:16:15.130 "state": "completed", 00:16:15.130 "digest": "sha512", 00:16:15.130 "dhgroup": "ffdhe2048" 00:16:15.130 } 00:16:15.130 } 00:16:15.130 ]' 00:16:15.130 23:42:03 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.130 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.130 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.130 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.130 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.388 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.388 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.388 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.388 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:16:15.954 23:42:04 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 
-- # for dhgroup in "${dhgroups[@]}" 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.213 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:16.471 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.471 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:16.471 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.471 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.471 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.730 { 00:16:16.730 "cntlid": 113, 00:16:16.730 "qid": 0, 00:16:16.730 "state": "enabled", 00:16:16.730 "thread": "nvmf_tgt_poll_group_000", 00:16:16.730 
"listen_address": { 00:16:16.730 "trtype": "RDMA", 00:16:16.730 "adrfam": "IPv4", 00:16:16.730 "traddr": "192.168.100.8", 00:16:16.730 "trsvcid": "4420" 00:16:16.730 }, 00:16:16.730 "peer_address": { 00:16:16.730 "trtype": "RDMA", 00:16:16.730 "adrfam": "IPv4", 00:16:16.730 "traddr": "192.168.100.8", 00:16:16.730 "trsvcid": "37811" 00:16:16.730 }, 00:16:16.730 "auth": { 00:16:16.730 "state": "completed", 00:16:16.730 "digest": "sha512", 00:16:16.730 "dhgroup": "ffdhe3072" 00:16:16.730 } 00:16:16.730 } 00:16:16.730 ]' 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.730 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.989 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.989 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.989 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.989 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.989 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.989 23:42:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 
00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.925 23:42:06 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.183 00:16:18.183 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.183 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.183 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.441 { 00:16:18.441 "cntlid": 115, 00:16:18.441 "qid": 0, 00:16:18.441 "state": "enabled", 00:16:18.441 "thread": "nvmf_tgt_poll_group_000", 00:16:18.441 "listen_address": { 00:16:18.441 "trtype": "RDMA", 00:16:18.441 "adrfam": "IPv4", 00:16:18.441 "traddr": "192.168.100.8", 00:16:18.441 "trsvcid": "4420" 00:16:18.441 }, 00:16:18.441 "peer_address": { 00:16:18.441 "trtype": "RDMA", 00:16:18.441 "adrfam": "IPv4", 00:16:18.441 "traddr": "192.168.100.8", 00:16:18.441 "trsvcid": "38056" 00:16:18.441 }, 00:16:18.441 "auth": { 00:16:18.441 "state": "completed", 00:16:18.441 "digest": "sha512", 00:16:18.441 "dhgroup": "ffdhe3072" 00:16:18.441 } 00:16:18.441 } 00:16:18.441 ]' 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 
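The [[ ... ]] comparisons around this point are the actual pass/fail assertions: nvmf_subsystem_get_qpairs reports the authentication parameters negotiated on the qpair, and the test requires them to match what was configured. A minimal standalone version of that check, using the same RPC and jq filters as the trace; the expected values are hard-coded here for the current iteration, whereas the script compares against its loop variables:

  # verify the negotiated DH-HMAC-CHAP parameters reported by the target
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth.state of "completed" is what distinguishes a qpair that actually finished DH-HMAC-CHAP from one that merely connected; if any of these checks fail, the xtrace aborts the run.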
00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.441 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.698 23:42:07 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:16:19.261 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.518 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.518 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:19.518 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:19.518 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.518 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:19.518 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:19.518 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.518 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.776 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.033 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.033 { 00:16:20.033 "cntlid": 117, 00:16:20.033 "qid": 0, 00:16:20.033 "state": "enabled", 00:16:20.033 "thread": "nvmf_tgt_poll_group_000", 00:16:20.033 "listen_address": { 00:16:20.033 "trtype": "RDMA", 00:16:20.033 "adrfam": "IPv4", 00:16:20.033 "traddr": "192.168.100.8", 00:16:20.033 "trsvcid": "4420" 00:16:20.033 }, 00:16:20.033 "peer_address": { 00:16:20.033 "trtype": "RDMA", 00:16:20.033 "adrfam": "IPv4", 00:16:20.033 "traddr": "192.168.100.8", 00:16:20.033 "trsvcid": "48396" 00:16:20.033 }, 00:16:20.033 "auth": { 00:16:20.033 "state": "completed", 00:16:20.033 "digest": "sha512", 00:16:20.033 "dhgroup": "ffdhe3072" 00:16:20.033 } 00:16:20.033 } 00:16:20.033 ]' 00:16:20.033 23:42:08 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:20.292 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.292 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:20.292 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.292 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:20.292 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.292 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.292 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.549 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:16:21.114 
23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.114 23:42:09 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:21.114 23:42:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:21.114 23:42:09 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.114 23:42:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:21.114 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.114 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.114 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.372 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.630 00:16:21.630 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.630 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.630 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.889 { 00:16:21.889 "cntlid": 119, 00:16:21.889 "qid": 0, 00:16:21.889 "state": "enabled", 00:16:21.889 "thread": "nvmf_tgt_poll_group_000", 00:16:21.889 "listen_address": { 00:16:21.889 "trtype": "RDMA", 00:16:21.889 "adrfam": "IPv4", 00:16:21.889 "traddr": "192.168.100.8", 00:16:21.889 "trsvcid": "4420" 00:16:21.889 }, 00:16:21.889 "peer_address": { 00:16:21.889 "trtype": "RDMA", 00:16:21.889 "adrfam": "IPv4", 00:16:21.889 "traddr": "192.168.100.8", 00:16:21.889 "trsvcid": "44192" 00:16:21.889 }, 00:16:21.889 "auth": { 00:16:21.889 "state": "completed", 00:16:21.889 "digest": "sha512", 00:16:21.889 "dhgroup": "ffdhe3072" 00:16:21.889 } 00:16:21.889 } 00:16:21.889 ]' 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.889 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.146 23:42:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:22.712 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.971 23:42:11 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.230 00:16:23.230 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.230 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.230 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.489 { 00:16:23.489 "cntlid": 121, 00:16:23.489 "qid": 0, 00:16:23.489 "state": "enabled", 00:16:23.489 "thread": "nvmf_tgt_poll_group_000", 00:16:23.489 "listen_address": { 00:16:23.489 "trtype": "RDMA", 00:16:23.489 "adrfam": "IPv4", 00:16:23.489 "traddr": "192.168.100.8", 00:16:23.489 "trsvcid": "4420" 00:16:23.489 }, 00:16:23.489 "peer_address": { 00:16:23.489 "trtype": "RDMA", 00:16:23.489 "adrfam": "IPv4", 00:16:23.489 "traddr": "192.168.100.8", 00:16:23.489 "trsvcid": "41830" 00:16:23.489 }, 00:16:23.489 "auth": { 
00:16:23.489 "state": "completed", 00:16:23.489 "digest": "sha512", 00:16:23.489 "dhgroup": "ffdhe4096" 00:16:23.489 } 00:16:23.489 } 00:16:23.489 ]' 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.489 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.747 23:42:12 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:16:24.313 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.571 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.829 00:16:24.829 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.829 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.829 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.088 { 00:16:25.088 "cntlid": 123, 00:16:25.088 "qid": 0, 00:16:25.088 "state": "enabled", 00:16:25.088 "thread": "nvmf_tgt_poll_group_000", 00:16:25.088 "listen_address": { 00:16:25.088 "trtype": "RDMA", 00:16:25.088 "adrfam": "IPv4", 00:16:25.088 "traddr": "192.168.100.8", 00:16:25.088 "trsvcid": "4420" 00:16:25.088 }, 00:16:25.088 "peer_address": { 00:16:25.088 "trtype": "RDMA", 00:16:25.088 "adrfam": "IPv4", 00:16:25.088 "traddr": "192.168.100.8", 00:16:25.088 "trsvcid": "58017" 00:16:25.088 }, 00:16:25.088 "auth": { 00:16:25.088 "state": "completed", 00:16:25.088 "digest": "sha512", 00:16:25.088 "dhgroup": "ffdhe4096" 00:16:25.088 } 00:16:25.088 } 00:16:25.088 ]' 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.088 23:42:13 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.088 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:25.088 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.347 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.347 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.347 
23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.347 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:16:25.913 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.173 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:26.173 23:42:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:26.173 23:42:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.173 23:42:14 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:26.173 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.173 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.173 23:42:14 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:26.173 23:42:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.433 23:42:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:26.433 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.433 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.433 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.691 { 00:16:26.691 "cntlid": 125, 00:16:26.691 "qid": 0, 00:16:26.691 "state": "enabled", 00:16:26.691 "thread": "nvmf_tgt_poll_group_000", 00:16:26.691 "listen_address": { 00:16:26.691 "trtype": "RDMA", 00:16:26.691 "adrfam": "IPv4", 00:16:26.691 "traddr": "192.168.100.8", 00:16:26.691 "trsvcid": "4420" 00:16:26.691 }, 00:16:26.691 "peer_address": { 00:16:26.691 "trtype": "RDMA", 00:16:26.691 "adrfam": "IPv4", 00:16:26.691 "traddr": "192.168.100.8", 00:16:26.691 "trsvcid": "54283" 00:16:26.691 }, 00:16:26.691 "auth": { 00:16:26.691 "state": "completed", 00:16:26.691 "digest": "sha512", 00:16:26.691 "dhgroup": "ffdhe4096" 00:16:26.691 } 00:16:26.691 } 00:16:26.691 ]' 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.691 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.949 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.949 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.949 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.949 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.949 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.949 23:42:15 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:27.883 23:42:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.142 23:42:16 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.142 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.143 23:42:16 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.143 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.401 { 00:16:28.401 "cntlid": 127, 00:16:28.401 "qid": 0, 00:16:28.401 "state": "enabled", 00:16:28.401 "thread": "nvmf_tgt_poll_group_000", 00:16:28.401 "listen_address": { 00:16:28.401 "trtype": "RDMA", 00:16:28.401 "adrfam": "IPv4", 00:16:28.401 "traddr": "192.168.100.8", 00:16:28.401 "trsvcid": "4420" 00:16:28.401 }, 00:16:28.401 "peer_address": { 00:16:28.401 "trtype": "RDMA", 00:16:28.401 "adrfam": "IPv4", 00:16:28.401 "traddr": "192.168.100.8", 00:16:28.401 "trsvcid": "45057" 00:16:28.401 }, 00:16:28.401 "auth": { 00:16:28.401 "state": "completed", 00:16:28.401 "digest": "sha512", 00:16:28.401 "dhgroup": "ffdhe4096" 00:16:28.401 } 00:16:28.401 } 00:16:28.401 ]' 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.401 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.659 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.659 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.659 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.659 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.659 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.659 23:42:17 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:16:29.227 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.485 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:29.485 23:42:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:29.485 23:42:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.485 23:42:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:29.485 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.485 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.485 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:29.485 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 
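Each connect_authenticate iteration traced above runs the same sequence; the following is a sketch of one pass only, assuming key0/ckey0 are the DH-HMAC-CHAP keys configured earlier in this run, with <hostnqn> standing for nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562, the rpc.py path shortened to scripts/rpc.py, and the DHHC-1 secrets passed to nvme-cli elided. Host-side calls go through /var/tmp/host.sock, target-side calls through the default RPC socket.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 <hostnqn> --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q <hostnqn> -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'        # expect nvme0
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'  # digest/dhgroup/state
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q <hostnqn> --hostid <host uuid> \
      --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'                 # kernel-initiator path
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 <hostnqn>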
00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.744 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.003 00:16:30.003 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.003 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.003 23:42:18 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.262 { 00:16:30.262 "cntlid": 129, 00:16:30.262 "qid": 0, 00:16:30.262 "state": "enabled", 00:16:30.262 "thread": "nvmf_tgt_poll_group_000", 00:16:30.262 "listen_address": { 00:16:30.262 "trtype": "RDMA", 00:16:30.262 "adrfam": "IPv4", 00:16:30.262 "traddr": "192.168.100.8", 00:16:30.262 "trsvcid": "4420" 00:16:30.262 }, 00:16:30.262 "peer_address": { 00:16:30.262 "trtype": "RDMA", 00:16:30.262 "adrfam": "IPv4", 00:16:30.262 "traddr": "192.168.100.8", 00:16:30.262 "trsvcid": "58287" 00:16:30.262 }, 00:16:30.262 "auth": { 00:16:30.262 "state": "completed", 00:16:30.262 "digest": "sha512", 00:16:30.262 "dhgroup": "ffdhe6144" 00:16:30.262 } 00:16:30.262 } 00:16:30.262 ]' 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.262 23:42:19 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.262 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.521 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:16:31.088 23:42:19 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
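The [[ ... ]] comparisons traced above are the actual pass/fail criterion for each pass: the target reports per-qpair auth metadata, and the test requires the negotiated digest, the DH group under test, and a completed handshake state. A standalone sketch of the same assertion, with the rpc.py path shortened and the subsystem NQN as used throughout this run:
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]     # negotiated hash
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]  # DH group under test
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished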
00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.347 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.915 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.915 { 00:16:31.915 "cntlid": 131, 00:16:31.915 "qid": 0, 00:16:31.915 "state": "enabled", 00:16:31.915 "thread": "nvmf_tgt_poll_group_000", 00:16:31.915 "listen_address": { 00:16:31.915 "trtype": "RDMA", 00:16:31.915 "adrfam": "IPv4", 00:16:31.915 "traddr": "192.168.100.8", 00:16:31.915 "trsvcid": "4420" 00:16:31.915 }, 00:16:31.915 "peer_address": { 00:16:31.915 "trtype": "RDMA", 00:16:31.915 "adrfam": "IPv4", 00:16:31.915 "traddr": "192.168.100.8", 00:16:31.915 "trsvcid": "33727" 00:16:31.915 }, 00:16:31.915 "auth": { 00:16:31.915 "state": "completed", 00:16:31.915 "digest": "sha512", 00:16:31.915 "dhgroup": "ffdhe6144" 00:16:31.915 } 00:16:31.915 } 00:16:31.915 ]' 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.915 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.174 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.174 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.174 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.174 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.174 23:42:20 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.174 23:42:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:16:33.110 23:42:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.110 23:42:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:33.110 23:42:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:33.110 23:42:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.110 23:42:21 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:33.110 23:42:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.110 23:42:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.110 23:42:21 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.110 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.676 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 
-- # jq -r '.[].name' 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.676 { 00:16:33.676 "cntlid": 133, 00:16:33.676 "qid": 0, 00:16:33.676 "state": "enabled", 00:16:33.676 "thread": "nvmf_tgt_poll_group_000", 00:16:33.676 "listen_address": { 00:16:33.676 "trtype": "RDMA", 00:16:33.676 "adrfam": "IPv4", 00:16:33.676 "traddr": "192.168.100.8", 00:16:33.676 "trsvcid": "4420" 00:16:33.676 }, 00:16:33.676 "peer_address": { 00:16:33.676 "trtype": "RDMA", 00:16:33.676 "adrfam": "IPv4", 00:16:33.676 "traddr": "192.168.100.8", 00:16:33.676 "trsvcid": "45366" 00:16:33.676 }, 00:16:33.676 "auth": { 00:16:33.676 "state": "completed", 00:16:33.676 "digest": "sha512", 00:16:33.676 "dhgroup": "ffdhe6144" 00:16:33.676 } 00:16:33.676 } 00:16:33.676 ]' 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.676 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.935 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.935 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.935 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.935 23:42:22 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:16:34.500 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.758 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:34.758 23:42:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:34.758 23:42:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.758 23:42:23 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:34.758 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.758 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.758 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.021 23:42:23 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:35.371 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.371 { 00:16:35.371 "cntlid": 135, 00:16:35.371 "qid": 0, 00:16:35.371 "state": "enabled", 00:16:35.371 "thread": "nvmf_tgt_poll_group_000", 00:16:35.371 "listen_address": { 00:16:35.371 "trtype": "RDMA", 00:16:35.371 
"adrfam": "IPv4", 00:16:35.371 "traddr": "192.168.100.8", 00:16:35.371 "trsvcid": "4420" 00:16:35.371 }, 00:16:35.371 "peer_address": { 00:16:35.371 "trtype": "RDMA", 00:16:35.371 "adrfam": "IPv4", 00:16:35.371 "traddr": "192.168.100.8", 00:16:35.371 "trsvcid": "44278" 00:16:35.371 }, 00:16:35.371 "auth": { 00:16:35.371 "state": "completed", 00:16:35.371 "digest": "sha512", 00:16:35.371 "dhgroup": "ffdhe6144" 00:16:35.371 } 00:16:35.371 } 00:16:35.371 ]' 00:16:35.371 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.654 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.654 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.654 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.654 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.654 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.654 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.654 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.654 23:42:24 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.589 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.590 23:42:25 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.156 00:16:37.156 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.156 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.156 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:37.415 { 00:16:37.415 "cntlid": 137, 00:16:37.415 "qid": 0, 00:16:37.415 "state": "enabled", 00:16:37.415 "thread": "nvmf_tgt_poll_group_000", 00:16:37.415 "listen_address": { 00:16:37.415 "trtype": "RDMA", 00:16:37.415 "adrfam": "IPv4", 00:16:37.415 "traddr": "192.168.100.8", 00:16:37.415 "trsvcid": "4420" 00:16:37.415 }, 00:16:37.415 "peer_address": { 00:16:37.415 "trtype": "RDMA", 00:16:37.415 "adrfam": "IPv4", 00:16:37.415 "traddr": "192.168.100.8", 00:16:37.415 "trsvcid": "58755" 00:16:37.415 }, 00:16:37.415 "auth": { 00:16:37.415 "state": "completed", 00:16:37.415 "digest": "sha512", 00:16:37.415 "dhgroup": "ffdhe8192" 00:16:37.415 } 00:16:37.415 } 00:16:37.415 ]' 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.415 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.416 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.416 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.674 23:42:26 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:16:38.241 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.241 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:38.241 23:42:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:38.241 23:42:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.241 23:42:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:38.241 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.241 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.241 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.500 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.067 00:16:39.067 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.067 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.067 23:42:27 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.326 { 00:16:39.326 "cntlid": 139, 00:16:39.326 "qid": 0, 00:16:39.326 "state": "enabled", 00:16:39.326 "thread": "nvmf_tgt_poll_group_000", 00:16:39.326 "listen_address": { 00:16:39.326 "trtype": "RDMA", 00:16:39.326 "adrfam": "IPv4", 00:16:39.326 "traddr": "192.168.100.8", 00:16:39.326 "trsvcid": "4420" 00:16:39.326 }, 00:16:39.326 "peer_address": { 00:16:39.326 "trtype": "RDMA", 00:16:39.326 "adrfam": "IPv4", 00:16:39.326 "traddr": "192.168.100.8", 00:16:39.326 "trsvcid": "37575" 00:16:39.326 }, 00:16:39.326 "auth": { 00:16:39.326 "state": "completed", 00:16:39.326 "digest": "sha512", 00:16:39.326 "dhgroup": "ffdhe8192" 00:16:39.326 } 00:16:39.326 } 00:16:39.326 ]' 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.326 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.584 23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjdjNTZmMDlkMmU4OGQ2NTEyNTc5NzMwZDlhZTcwZjI0Xm8o: --dhchap-ctrl-secret DHHC-1:02:MmNlOTRhMDczMjAxNmNhNzdhODJlYWQ5NTY0NzJlZTFiZDJmOTA0NjY5YzUyZjIwiJVmZA==: 00:16:40.150 
23:42:28 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.150 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:40.150 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:40.150 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.150 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:40.150 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.150 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.150 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:40.407 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.408 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.972 00:16:40.972 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.972 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.972 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.228 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.228 23:42:29 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.228 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:41.228 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.228 23:42:29 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:41.228 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.228 { 00:16:41.228 "cntlid": 141, 00:16:41.228 "qid": 0, 00:16:41.228 "state": "enabled", 00:16:41.228 "thread": "nvmf_tgt_poll_group_000", 00:16:41.228 "listen_address": { 00:16:41.228 "trtype": "RDMA", 00:16:41.228 "adrfam": "IPv4", 00:16:41.228 "traddr": "192.168.100.8", 00:16:41.228 "trsvcid": "4420" 00:16:41.228 }, 00:16:41.228 "peer_address": { 00:16:41.228 "trtype": "RDMA", 00:16:41.228 "adrfam": "IPv4", 00:16:41.228 "traddr": "192.168.100.8", 00:16:41.228 "trsvcid": "54981" 00:16:41.228 }, 00:16:41.228 "auth": { 00:16:41.228 "state": "completed", 00:16:41.228 "digest": "sha512", 00:16:41.228 "dhgroup": "ffdhe8192" 00:16:41.228 } 00:16:41.228 } 00:16:41.228 ]' 00:16:41.228 23:42:29 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.228 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.228 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.228 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.228 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.228 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.228 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.228 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.485 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ODdiZWU3OTk3NjYzNWI3NTczMDk4NGEwNWNjM2Q2M2YyN2VkOWFiY2Q4OWI0NjBh2owSGg==: --dhchap-ctrl-secret DHHC-1:01:NjM5OGRlOTVjMWQ2OTZkNmE1Zjg1YWRlM2ZmY2M4NzckwCgl: 00:16:42.052 23:42:30 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.052 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:42.052 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:42.052 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.052 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:42.052 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.052 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.052 23:42:31 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.310 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:42.876 00:16:42.876 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.876 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.876 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.135 { 00:16:43.135 "cntlid": 143, 00:16:43.135 "qid": 0, 00:16:43.135 "state": "enabled", 00:16:43.135 "thread": "nvmf_tgt_poll_group_000", 00:16:43.135 "listen_address": { 00:16:43.135 "trtype": "RDMA", 00:16:43.135 "adrfam": "IPv4", 00:16:43.135 "traddr": "192.168.100.8", 00:16:43.135 "trsvcid": "4420" 00:16:43.135 }, 00:16:43.135 "peer_address": { 00:16:43.135 "trtype": "RDMA", 00:16:43.135 "adrfam": "IPv4", 00:16:43.135 "traddr": "192.168.100.8", 00:16:43.135 "trsvcid": "36927" 00:16:43.135 }, 00:16:43.135 "auth": { 00:16:43.135 "state": 
"completed", 00:16:43.135 "digest": "sha512", 00:16:43.135 "dhgroup": "ffdhe8192" 00:16:43.135 } 00:16:43.135 } 00:16:43.135 ]' 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:43.135 23:42:31 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.135 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.135 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.135 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.392 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.957 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:43.957 23:42:32 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key0 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.215 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:44.780 00:16:44.780 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.780 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.780 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.780 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.780 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.780 23:42:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:44.780 23:42:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.037 { 00:16:45.037 "cntlid": 145, 00:16:45.037 "qid": 0, 00:16:45.037 "state": "enabled", 00:16:45.037 "thread": "nvmf_tgt_poll_group_000", 00:16:45.037 "listen_address": { 00:16:45.037 "trtype": "RDMA", 00:16:45.037 "adrfam": "IPv4", 00:16:45.037 "traddr": "192.168.100.8", 00:16:45.037 "trsvcid": "4420" 00:16:45.037 }, 00:16:45.037 "peer_address": { 00:16:45.037 "trtype": "RDMA", 00:16:45.037 "adrfam": "IPv4", 00:16:45.037 "traddr": "192.168.100.8", 00:16:45.037 "trsvcid": "39064" 00:16:45.037 }, 00:16:45.037 "auth": { 00:16:45.037 "state": "completed", 00:16:45.037 "digest": "sha512", 00:16:45.037 "dhgroup": "ffdhe8192" 00:16:45.037 } 00:16:45.037 } 00:16:45.037 ]' 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.037 23:42:33 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.295 23:42:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NjJmN2Y5MGI3ZGM3YjdkNzIzM2VhMTczYjQ5MGM2ZWRhMTNiNWY0YWE0MzJjYTRiNdEgAQ==: --dhchap-ctrl-secret DHHC-1:03:NDVkYjJhZWNkODcyNzYyNWIxOGM1OWFjMjZkMWNhNjA2NzU1ODQwNWEwNGM4NTBmMjE5NTIwYjc4ZDg0NzMxZBXSVCE=: 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 
--dhchap-key key2 00:16:45.862 23:42:34 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:17.934 request: 00:17:17.934 { 00:17:17.934 "name": "nvme0", 00:17:17.934 "trtype": "rdma", 00:17:17.934 "traddr": "192.168.100.8", 00:17:17.934 "adrfam": "ipv4", 00:17:17.934 "trsvcid": "4420", 00:17:17.934 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:17.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:17.934 "prchk_reftag": false, 00:17:17.934 "prchk_guard": false, 00:17:17.934 "hdgst": false, 00:17:17.934 "ddgst": false, 00:17:17.934 "dhchap_key": "key2", 00:17:17.934 "method": "bdev_nvme_attach_controller", 00:17:17.934 "req_id": 1 00:17:17.934 } 00:17:17.934 Got JSON-RPC error response 00:17:17.934 response: 00:17:17.934 { 00:17:17.934 "code": -5, 00:17:17.934 "message": "Input/output error" 00:17:17.934 } 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 
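The expected-failure steps in this part of auth.sh all run through the harness's NOT/valid_exec_arg wrappers traced above: the hostrpc call is executed, its exit status is captured in es, and the step only counts as passed when es is non-zero. A simplified stand-in for that helper (illustrative only, not the autotest_common.sh implementation) behaves like this:

  # Minimal sketch of the NOT-style inversion used by these negative tests.
  NOT() {
    if "$@"; then
      return 1   # wrapped command unexpectedly succeeded -> test step fails
    fi
    return 0     # wrapped command failed, which is the desired outcome here
  }

  # Usage mirroring the log:
  #   NOT hostrpc bdev_nvme_attach_controller ... --dhchap-key key2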
00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:17.934 23:43:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:17.934 request: 00:17:17.934 { 00:17:17.934 "name": "nvme0", 00:17:17.934 "trtype": "rdma", 00:17:17.934 "traddr": "192.168.100.8", 00:17:17.934 "adrfam": "ipv4", 00:17:17.935 "trsvcid": "4420", 00:17:17.935 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:17.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:17.935 "prchk_reftag": false, 00:17:17.935 "prchk_guard": false, 00:17:17.935 "hdgst": false, 00:17:17.935 "ddgst": false, 00:17:17.935 "dhchap_key": "key1", 00:17:17.935 "dhchap_ctrlr_key": "ckey2", 00:17:17.935 "method": "bdev_nvme_attach_controller", 00:17:17.935 "req_id": 1 00:17:17.935 } 00:17:17.935 Got JSON-RPC error response 00:17:17.935 response: 00:17:17.935 { 00:17:17.935 "code": -5, 00:17:17.935 "message": "Input/output error" 00:17:17.935 } 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 
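Both failures above are the point of this stretch of auth.sh: the target has been told to accept this host with key1 only, so a host-side attach that presents key2, or key1 with the wrong controller key (ckey2), must be rejected during DH-HMAC-CHAP and surfaces as the -5 (Input/output error) JSON-RPC responses shown. A minimal standalone replay of the first of these checks, using only invocations that appear verbatim in this log (paths, NQNs and the 192.168.100.8 listener are taken from this run; the if/else is a simplification of the NOT wrapper):

  SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

  # Target side (default /var/tmp/spdk.sock): authorize the host with key1 only.
  $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key1

  # Host side: attempting to attach with key2 must fail; a non-zero exit status
  # from rpc.py is the passing outcome of this negative test.
  if $SPDK/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2; then
    echo "unexpected success" >&2
    exit 1
  fi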
00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.935 23:43:05 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.042 request: 00:17:50.042 { 00:17:50.042 "name": "nvme0", 00:17:50.042 "trtype": "rdma", 00:17:50.042 "traddr": "192.168.100.8", 00:17:50.042 "adrfam": "ipv4", 00:17:50.042 "trsvcid": "4420", 00:17:50.042 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:50.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:17:50.042 "prchk_reftag": false, 00:17:50.042 "prchk_guard": false, 00:17:50.042 "hdgst": false, 00:17:50.042 "ddgst": false, 00:17:50.042 "dhchap_key": "key1", 00:17:50.042 "dhchap_ctrlr_key": "ckey1", 00:17:50.042 "method": "bdev_nvme_attach_controller", 00:17:50.042 "req_id": 1 00:17:50.042 } 00:17:50.042 Got JSON-RPC error response 00:17:50.042 response: 00:17:50.042 { 00:17:50.042 "code": -5, 00:17:50.042 "message": "Input/output error" 00:17:50.042 } 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1445076 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1445076 ']' 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1445076 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@947 -- # uname 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1445076 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1445076' 00:17:50.042 killing process with pid 1445076 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1445076 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1445076 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1477845 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1477845 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1477845 ']' 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
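At target/auth.sh@139 the target application is restarted with DH-CHAP debug logging enabled; the expanded command line is visible in the nvmf/common.sh@480 trace above. Outside the harness, the same restart amounts roughly to the sketch below (the readiness loop is an assumption standing in for the waitforlisten helper, with rpc_get_methods used only as a liveness probe):

  # Restart the target with nvmf_auth logging, deferring init until RPC is up.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!

  # Poll the RPC socket until the application answers (simplified waitforlisten).
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done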
00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:50.042 23:43:36 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1477845 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@823 -- # '[' -z 1477845 ']' 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@828 -- # local max_retries=100 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@832 -- # xtrace_disable 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@856 -- # return 0 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.042 23:43:37 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.042 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:50.042 { 00:17:50.042 "cntlid": 1, 00:17:50.042 "qid": 0, 00:17:50.042 "state": "enabled", 00:17:50.042 "thread": "nvmf_tgt_poll_group_000", 00:17:50.042 "listen_address": { 00:17:50.042 "trtype": "RDMA", 00:17:50.042 "adrfam": "IPv4", 00:17:50.042 "traddr": "192.168.100.8", 00:17:50.042 "trsvcid": "4420" 00:17:50.042 }, 00:17:50.042 "peer_address": { 00:17:50.042 "trtype": "RDMA", 00:17:50.042 "adrfam": "IPv4", 00:17:50.042 "traddr": "192.168.100.8", 00:17:50.042 "trsvcid": "48911" 00:17:50.042 }, 00:17:50.042 "auth": { 00:17:50.042 "state": "completed", 00:17:50.042 "digest": "sha512", 00:17:50.042 "dhgroup": "ffdhe8192" 00:17:50.042 } 00:17:50.042 } 00:17:50.042 ]' 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.042 23:43:38 nvmf_rdma.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 
803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:Yjc2YTczZDhkNGVkYjU3OGQ5ZmJmYTNjMzA4MzkwNzA4MDU0YjNkOWMxNDQxYzQ0YjY1OTYzYWFjMDgwOWEyZaDSyww=: 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:50.607 23:43:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.864 23:43:39 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.934 request: 00:18:22.934 { 00:18:22.934 "name": "nvme0", 
00:18:22.934 "trtype": "rdma", 00:18:22.934 "traddr": "192.168.100.8", 00:18:22.934 "adrfam": "ipv4", 00:18:22.934 "trsvcid": "4420", 00:18:22.934 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:22.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:22.934 "prchk_reftag": false, 00:18:22.934 "prchk_guard": false, 00:18:22.934 "hdgst": false, 00:18:22.934 "ddgst": false, 00:18:22.934 "dhchap_key": "key3", 00:18:22.934 "method": "bdev_nvme_attach_controller", 00:18:22.934 "req_id": 1 00:18:22.934 } 00:18:22.934 Got JSON-RPC error response 00:18:22.934 response: 00:18:22.934 { 00:18:22.934 "code": -5, 00:18:22.934 "message": "Input/output error" 00:18:22.934 } 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:18:22.934 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:22.935 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:18:22.935 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:22.935 23:44:10 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.935 23:44:10 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.075 request: 00:18:55.075 { 00:18:55.075 "name": "nvme0", 
00:18:55.075 "trtype": "rdma", 00:18:55.075 "traddr": "192.168.100.8", 00:18:55.075 "adrfam": "ipv4", 00:18:55.075 "trsvcid": "4420", 00:18:55.075 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:55.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:55.075 "prchk_reftag": false, 00:18:55.075 "prchk_guard": false, 00:18:55.075 "hdgst": false, 00:18:55.075 "ddgst": false, 00:18:55.075 "dhchap_key": "key3", 00:18:55.075 "method": "bdev_nvme_attach_controller", 00:18:55.075 "req_id": 1 00:18:55.075 } 00:18:55.075 Got JSON-RPC error response 00:18:55.075 response: 00:18:55.075 { 00:18:55.075 "code": -5, 00:18:55.075 "message": "Input/output error" 00:18:55.075 } 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@642 -- # local es=0 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@644 -- # valid_exec_arg hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@630 -- # local arg=hostrpc 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # type -t hostrpc 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:55.075 request: 00:18:55.075 { 00:18:55.075 "name": "nvme0", 00:18:55.075 "trtype": "rdma", 00:18:55.075 "traddr": "192.168.100.8", 00:18:55.075 "adrfam": "ipv4", 00:18:55.075 "trsvcid": "4420", 00:18:55.075 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:55.075 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:55.075 "prchk_reftag": false, 00:18:55.075 "prchk_guard": false, 00:18:55.075 "hdgst": false, 00:18:55.075 "ddgst": false, 00:18:55.075 "dhchap_key": "key0", 00:18:55.075 "dhchap_ctrlr_key": "key1", 00:18:55.075 "method": "bdev_nvme_attach_controller", 00:18:55.075 "req_id": 1 00:18:55.075 } 00:18:55.075 Got JSON-RPC error response 00:18:55.075 response: 00:18:55.075 { 00:18:55.075 "code": -5, 00:18:55.075 "message": "Input/output error" 00:18:55.075 } 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@645 -- # es=1 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.075 23:44:40 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:55.075 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.075 23:44:41 
nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1445187 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1445187 ']' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1445187 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1445187 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1445187' 00:18:55.075 killing process with pid 1445187 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1445187 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1445187 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:55.075 rmmod nvme_rdma 00:18:55.075 rmmod nvme_fabrics 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1477845 ']' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1477845 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@942 -- # '[' -z 1477845 ']' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@946 -- # kill -0 1477845 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@947 -- # uname 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 
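The cleanup path above detaches the last attached controller over the host RPC socket, stops the host-side application by pid, unloads the kernel initiator modules (the source of the rmmod nvme_rdma / rmmod nvme_fabrics messages), and finally stops the restarted target. Condensed into plain commands (the pids are the ones recorded in this run; plain kill stands in for the harness's killprocess helper):

  # Drop the controller created by the final, successful attach.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Stop the host-side app, unload the kernel initiator modules, stop the target.
  kill 1445187
  modprobe -v -r nvme-rdma
  modprobe -v -r nvme-fabrics
  kill 1477845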
00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1477845 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1477845' 00:18:55.075 killing process with pid 1477845 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@961 -- # kill 1477845 00:18:55.075 23:44:41 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@966 -- # wait 1477845 00:18:55.075 23:44:42 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:55.075 23:44:42 nvmf_rdma.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:55.075 23:44:42 nvmf_rdma.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Mzj /tmp/spdk.key-sha256.29u /tmp/spdk.key-sha384.EDy /tmp/spdk.key-sha512.RD8 /tmp/spdk.key-sha512.BBP /tmp/spdk.key-sha384.qIR /tmp/spdk.key-sha256.rC8 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:18:55.075 00:18:55.075 real 4m20.281s 00:18:55.075 user 9m23.439s 00:18:55.075 sys 0m18.408s 00:18:55.075 23:44:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:18:55.075 23:44:42 nvmf_rdma.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.075 ************************************ 00:18:55.075 END TEST nvmf_auth_target 00:18:55.075 ************************************ 00:18:55.075 23:44:42 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:18:55.075 23:44:42 nvmf_rdma -- nvmf/nvmf.sh@59 -- # '[' rdma = tcp ']' 00:18:55.075 23:44:42 nvmf_rdma -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:18:55.075 23:44:42 nvmf_rdma -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:18:55.075 23:44:42 nvmf_rdma -- nvmf/nvmf.sh@72 -- # '[' rdma = tcp ']' 00:18:55.075 23:44:42 nvmf_rdma -- nvmf/nvmf.sh@78 -- # [[ rdma == \r\d\m\a ]] 00:18:55.075 23:44:42 nvmf_rdma -- nvmf/nvmf.sh@81 -- # run_test nvmf_srq_overwhelm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:18:55.075 23:44:42 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:18:55.075 23:44:42 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:18:55.075 23:44:42 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:18:55.075 ************************************ 00:18:55.075 START TEST nvmf_srq_overwhelm 00:18:55.075 ************************************ 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:18:55.075 * Looking for test storage... 
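With the auth target test finished (timing summary above), nvmf.sh dispatches the next RDMA target test via run_test. To reproduce just this test outside Jenkins, the equivalent standalone invocation would be roughly the following, assuming an SPDK checkout at the same path, a configured RDMA NIC pair, and root privileges (the test script loads kernel modules and uses privileged network setup):

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  sudo ./test/nvmf/target/srq_overwhelm.sh --transport=rdma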
00:18:55.075 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.075 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:18:55.076 23:44:42 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:59.264 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:59.264 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:59.264 Found net devices under 0000:da:00.0: mlx_0_0 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:59.264 Found net devices under 0000:da:00.1: mlx_0_1 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:59.264 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:59.265 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:59.265 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:18:59.265 altname enp218s0f0np0 00:18:59.265 altname ens818f0np0 00:18:59.265 inet 192.168.100.8/24 scope global mlx_0_0 00:18:59.265 valid_lft forever preferred_lft forever 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:59.265 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:59.265 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:18:59.265 altname enp218s0f1np1 00:18:59.265 altname ens818f1np1 00:18:59.265 inet 192.168.100.9/24 scope global mlx_0_1 00:18:59.265 valid_lft forever preferred_lft forever 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 
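
The trace above loads the IB/RDMA kernel modules and then resolves each mlx interface's IPv4 address with ip -o -4 addr show piped through awk and cut. Below is a minimal stand-alone sketch of that lookup, assuming the same two interface names; treat it as a reconstruction of what the traced helper does, not the literal nvmf/common.sh source.

  #!/usr/bin/env bash
  # Reconstruction of the address lookup traced above: take the IPv4
  # address(es) reported for an interface and strip the /prefix length.
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  # The test expects the two mlx ports to end up with 192.168.100.8 and .9.
  for nic in mlx_0_0 mlx_0_1; do
      echo "$nic -> $(get_ip_address "$nic")"
  done

The follow-up check in the trace ([[ -z 192.168.100.8 ]]) only verifies that a non-empty address came back before dumping the full ip addr show output for the interface.
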
00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:59.265 
192.168.100.9' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:59.265 192.168.100.9' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:59.265 192.168.100.9' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=1491671 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 1491671 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@823 -- # '[' -z 1491671 ']' 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@828 -- # local max_retries=100 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@832 -- # xtrace_disable 00:18:59.265 23:44:47 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.265 [2024-07-15 23:44:47.788140] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:18:59.265 [2024-07-15 23:44:47.788181] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:59.265 [2024-07-15 23:44:47.843430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:59.265 [2024-07-15 23:44:47.925153] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
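
As the lines above show, get_available_rdma_ips returns the addresses as one newline-separated string, and the first and second target IPs are peeled off with head and tail. A short sketch of that split, using the same two addresses that appear in the log:

  #!/usr/bin/env bash
  # Build the list the same way the trace shows it: one address per line.
  RDMA_IP_LIST=$(printf '%s\n' 192.168.100.8 192.168.100.9)

  # First target = first line; second target = first line of what remains.
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

  echo "first=$NVMF_FIRST_TARGET_IP second=$NVMF_SECOND_TARGET_IP"

An empty NVMF_FIRST_TARGET_IP would trip the '[' -z ... ']' guard seen in the trace; with both addresses present the script goes on to extend NVMF_TRANSPORT_OPTS and load nvme-rdma on the initiator side.
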
00:18:59.265 [2024-07-15 23:44:47.925189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:59.265 [2024-07-15 23:44:47.925196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:59.265 [2024-07-15 23:44:47.925202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:59.265 [2024-07-15 23:44:47.925207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:59.265 [2024-07-15 23:44:47.925257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.265 [2024-07-15 23:44:47.925352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.265 [2024-07-15 23:44:47.925439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:59.265 [2024-07-15 23:44:47.925440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@856 -- # return 0 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.833 [2024-07-15 23:44:48.662508] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x794cc0/0x7991b0) succeed. 00:18:59.833 [2024-07-15 23:44:48.671664] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x796300/0x7da840) succeed. 
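
The block above starts the target (nvmf_tgt -i 0 -e 0xFFFF -m 0xF, four reactors for core mask 0xF) and then creates the RDMA transport over the RPC socket. The sketch below approximates that bring-up with SPDK's scripts/rpc.py; the test's rpc_cmd and waitforlisten helpers do the same job, so the polling loop here is an assumption rather than the autotest's exact code.

  #!/usr/bin/env bash
  # Paths follow the workspace layout shown in the log.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}

  # Start the NVMe-oF target: instance 0, all tracepoint groups, core mask 0xF.
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Wait for the RPC socket (/var/tmp/spdk.sock by default) to accept requests.
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 1
  done

  # Same transport options as the traced nvmf_create_transport call.
  "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma \
      --num-shared-buffers 1024 -u 8192 -s 1024
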
00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.833 Malloc0 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:59.833 [2024-07-15 23:44:48.766367] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:18:59.833 23:44:48 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # local i=0 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # grep -q -w nvme0n1 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # return 0 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:00.767 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.025 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.025 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:01.025 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.025 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.025 Malloc1 00:19:01.025 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.025 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:01.025 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.025 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.026 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.026 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:01.026 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.026 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.026 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.026 23:44:49 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:19:01.960 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:19:01.960 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # local i=0 00:19:01.960 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME 00:19:01.960 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # grep -q -w nvme1n1 00:19:01.960 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:19:01.960 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # return 0 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.961 Malloc2 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:01.961 23:44:50 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # local i=0 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # grep -q -w nvme2n1 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # return 0 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:02.897 Malloc3 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:02.897 23:44:51 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:02.897 23:44:51 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # local i=0 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # grep -q -w nvme3n1 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # return 0 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:04.271 Malloc4 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 
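
The RPC calls above repeat the same pattern for cnode0 through cnode5: create the subsystem, back it with a 64 MB malloc bdev, attach the namespace and an RDMA listener, connect from the initiator, then wait for the block device. Collected into one loop as a sketch; RPC_CMD and the assumption that the new controllers enumerate as nvme0..nvme5 in connect order stand in for what the test derives at runtime.

  #!/usr/bin/env bash
  # Sketch of the per-subsystem loop traced above (seq 0 5).
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
  RPC_CMD="$SPDK_DIR/scripts/rpc.py"
  TARGET_IP=192.168.100.8
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
  HOSTID=803833e2-2ada-e911-906e-0017a4403562

  for i in $(seq 0 5); do
      nqn=nqn.2016-06.io.spdk:cnode$i
      # Allow any host (-a) and give each subsystem a distinct serial number.
      $RPC_CMD nvmf_create_subsystem "$nqn" -a -s "$(printf 'SPDK%014d' "$i")"
      # 64 MB malloc bdev with 512-byte blocks, exposed as the first namespace.
      $RPC_CMD bdev_malloc_create 64 512 -b "Malloc$i"
      $RPC_CMD nvmf_subsystem_add_ns "$nqn" "Malloc$i"
      $RPC_CMD nvmf_subsystem_add_listener "$nqn" -t rdma -a "$TARGET_IP" -s 4420
      # Initiator side: connect, then poll lsblk until the namespace appears
      # (assumes the controllers enumerate as nvme0..nvme5 in connect order).
      nvme connect -i 15 --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
          -t rdma -n "$nqn" -a "$TARGET_IP" -s 4420
      until lsblk -l -o NAME | grep -q -w "nvme${i}n1"; do sleep 1; done
  done
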
00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:04.271 23:44:52 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1229 -- # local i=0 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # grep -q -w nvme4n1 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # return 0 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:05.206 Malloc5 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:05.206 23:44:53 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:19:06.140 23:44:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:19:06.140 23:44:54 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1229 -- # local i=0 00:19:06.140 23:44:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # lsblk -l -o NAME 00:19:06.140 23:44:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1230 -- # grep -q -w nvme5n1 00:19:06.141 23:44:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:19:06.141 23:44:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:19:06.141 23:44:54 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1240 -- # return 0 00:19:06.141 23:44:54 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:19:06.141 [global] 00:19:06.141 thread=1 00:19:06.141 invalidate=1 00:19:06.141 rw=read 00:19:06.141 time_based=1 00:19:06.141 runtime=10 00:19:06.141 ioengine=libaio 00:19:06.141 direct=1 00:19:06.141 bs=1048576 00:19:06.141 iodepth=128 00:19:06.141 norandommap=1 00:19:06.141 numjobs=13 00:19:06.141 00:19:06.141 [job0] 00:19:06.141 filename=/dev/nvme0n1 00:19:06.141 [job1] 00:19:06.141 filename=/dev/nvme1n1 00:19:06.141 [job2] 00:19:06.141 filename=/dev/nvme2n1 00:19:06.141 [job3] 00:19:06.141 filename=/dev/nvme3n1 00:19:06.141 [job4] 00:19:06.141 filename=/dev/nvme4n1 00:19:06.141 [job5] 00:19:06.141 filename=/dev/nvme5n1 00:19:06.141 Could not set queue depth (nvme0n1) 00:19:06.141 Could not set queue depth (nvme1n1) 00:19:06.141 Could not set queue depth (nvme2n1) 00:19:06.141 Could not set queue depth (nvme3n1) 00:19:06.141 Could not set queue depth (nvme4n1) 00:19:06.141 Could not set queue depth (nvme5n1) 00:19:06.398 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:06.398 ... 00:19:06.398 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:06.398 ... 00:19:06.398 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:06.398 ... 00:19:06.398 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:06.398 ... 00:19:06.398 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:06.398 ... 00:19:06.398 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:19:06.398 ... 
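
The fio-wrapper call above (-p nvmf -i 1048576 -d 128 -t read -r 10 -n 13) generates the [global]/[jobN] file printed in the log; six job sections times numjobs=13 is where the "Starting 78 threads" line below comes from. A rough reconstruction of that job file and invocation follows; the /tmp path and the plain fio call are assumptions, since the wrapper manages its own temporary file and options.

  #!/usr/bin/env bash
  # Rebuild the job file shown in the trace and hand it to fio.
  job=/tmp/srq_overwhelm.fio

  {
      printf '[global]\n'
      # Same knobs the wrapper printed: 1 MiB reads, QD 128, 13 clones per job, 10 s.
      printf '%s\n' thread=1 invalidate=1 rw=read time_based=1 runtime=10 \
          ioengine=libaio direct=1 bs=1048576 iodepth=128 norandommap=1 numjobs=13
      # One job section per connected namespace, matching job0..job5 above.
      for i in $(seq 0 5); do
          printf '[job%d]\nfilename=/dev/nvme%dn1\n' "$i" "$i"
      done
  } > "$job"

  fio "$job"
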
00:19:06.398 fio-3.35 00:19:06.398 Starting 78 threads 00:19:21.281 00:19:21.281 job0: (groupid=0, jobs=1): err= 0: pid=1493114: Mon Jul 15 23:45:08 2024 00:19:21.281 read: IOPS=6, BW=6594KiB/s (6752kB/s)(83.0MiB/12890msec) 00:19:21.281 slat (usec): min=393, max=2138.4k, avg=129587.24, stdev=469025.34 00:19:21.281 clat (msec): min=2133, max=12888, avg=11264.54, stdev=2591.86 00:19:21.281 lat (msec): min=4143, max=12889, avg=11394.13, stdev=2390.86 00:19:21.281 clat percentiles (msec): 00:19:21.281 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 8423], 20.00th=[ 8490], 00:19:21.281 | 30.00th=[12281], 40.00th=[12416], 50.00th=[12416], 60.00th=[12550], 00:19:21.281 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:19:21.281 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:21.281 | 99.99th=[12953] 00:19:21.281 lat (msec) : >=2000=100.00% 00:19:21.281 cpu : usr=0.00%, sys=0.40%, ctx=169, majf=0, minf=21249 00:19:21.281 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:19:21.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.281 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:21.281 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.281 job0: (groupid=0, jobs=1): err= 0: pid=1493115: Mon Jul 15 23:45:08 2024 00:19:21.281 read: IOPS=57, BW=57.4MiB/s (60.2MB/s)(618MiB/10767msec) 00:19:21.281 slat (usec): min=36, max=2061.7k, avg=17415.41, stdev=142705.80 00:19:21.281 clat (usec): min=1698, max=5026.8k, avg=2083254.24, stdev=1488451.59 00:19:21.281 lat (msec): min=775, max=5029, avg=2100.67, stdev=1489.21 00:19:21.281 clat percentiles (msec): 00:19:21.281 | 1.00th=[ 785], 5.00th=[ 785], 10.00th=[ 793], 20.00th=[ 818], 00:19:21.281 | 30.00th=[ 860], 40.00th=[ 902], 50.00th=[ 1183], 60.00th=[ 2567], 00:19:21.281 | 70.00th=[ 2869], 80.00th=[ 3071], 90.00th=[ 4866], 95.00th=[ 4933], 00:19:21.281 | 99.00th=[ 4933], 99.50th=[ 4933], 99.90th=[ 5000], 99.95th=[ 5000], 00:19:21.281 | 99.99th=[ 5000] 00:19:21.281 bw ( KiB/s): min= 6144, max=161792, per=3.00%, avg=91229.09, stdev=55902.25, samples=11 00:19:21.281 iops : min= 6, max= 158, avg=89.09, stdev=54.59, samples=11 00:19:21.281 lat (msec) : 2=0.16%, 1000=44.82%, 2000=9.71%, >=2000=45.31% 00:19:21.281 cpu : usr=0.02%, sys=1.09%, ctx=612, majf=0, minf=32769 00:19:21.281 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:19:21.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.281 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.281 issued rwts: total=618,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.281 job0: (groupid=0, jobs=1): err= 0: pid=1493116: Mon Jul 15 23:45:08 2024 00:19:21.281 read: IOPS=1, BW=1272KiB/s (1303kB/s)(16.0MiB/12877msec) 00:19:21.281 slat (usec): min=1555, max=2132.2k, avg=672191.66, stdev=1011116.19 00:19:21.281 clat (msec): min=2121, max=12874, avg=10280.25, stdev=3437.95 00:19:21.281 lat (msec): min=4253, max=12876, avg=10952.44, stdev=2710.91 00:19:21.281 clat percentiles (msec): 00:19:21.281 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4245], 20.00th=[ 8490], 00:19:21.281 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:19:21.281 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:19:21.281 | 
99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.281 | 99.99th=[12818] 00:19:21.281 lat (msec) : >=2000=100.00% 00:19:21.281 cpu : usr=0.00%, sys=0.09%, ctx=68, majf=0, minf=4097 00:19:21.281 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:19:21.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.281 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.281 issued rwts: total=16,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.281 job0: (groupid=0, jobs=1): err= 0: pid=1493117: Mon Jul 15 23:45:08 2024 00:19:21.281 read: IOPS=3, BW=3261KiB/s (3339kB/s)(41.0MiB/12876msec) 00:19:21.281 slat (usec): min=351, max=2150.6k, avg=261857.98, stdev=679522.24 00:19:21.281 clat (msec): min=2138, max=12874, avg=6792.43, stdev=3684.73 00:19:21.281 lat (msec): min=4074, max=12874, avg=7054.29, stdev=3726.99 00:19:21.281 clat percentiles (msec): 00:19:21.281 | 1.00th=[ 2140], 5.00th=[ 4077], 10.00th=[ 4077], 20.00th=[ 4077], 00:19:21.281 | 30.00th=[ 4077], 40.00th=[ 4144], 50.00th=[ 4212], 60.00th=[ 6342], 00:19:21.281 | 70.00th=[ 8490], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:19:21.281 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.281 | 99.99th=[12818] 00:19:21.281 lat (msec) : >=2000=100.00% 00:19:21.281 cpu : usr=0.00%, sys=0.17%, ctx=104, majf=0, minf=10497 00:19:21.281 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:19:21.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.281 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.281 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.281 job0: (groupid=0, jobs=1): err= 0: pid=1493118: Mon Jul 15 23:45:08 2024 00:19:21.281 read: IOPS=2, BW=2315KiB/s (2370kB/s)(29.0MiB/12828msec) 00:19:21.281 slat (usec): min=342, max=2131.5k, avg=368578.60, stdev=801541.68 00:19:21.281 clat (msec): min=2138, max=10695, avg=5743.89, stdev=2490.71 00:19:21.281 lat (msec): min=4207, max=12827, avg=6112.47, stdev=2718.56 00:19:21.281 clat percentiles (msec): 00:19:21.281 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4279], 00:19:21.281 | 30.00th=[ 4279], 40.00th=[ 4279], 50.00th=[ 4279], 60.00th=[ 4279], 00:19:21.281 | 70.00th=[ 6409], 80.00th=[ 8557], 90.00th=[10671], 95.00th=[10671], 00:19:21.281 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:21.281 | 99.99th=[10671] 00:19:21.281 lat (msec) : >=2000=100.00% 00:19:21.281 cpu : usr=0.00%, sys=0.12%, ctx=48, majf=0, minf=7425 00:19:21.281 IO depths : 1=3.4%, 2=6.9%, 4=13.8%, 8=27.6%, 16=48.3%, 32=0.0%, >=64=0.0% 00:19:21.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.281 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:21.281 issued rwts: total=29,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.281 job0: (groupid=0, jobs=1): err= 0: pid=1493119: Mon Jul 15 23:45:08 2024 00:19:21.281 read: IOPS=17, BW=17.7MiB/s (18.6MB/s)(229MiB/12902msec) 00:19:21.282 slat (usec): min=91, max=2168.1k, avg=47036.47, stdev=286316.91 00:19:21.282 clat (msec): min=558, max=12465, avg=7022.55, stdev=5390.61 00:19:21.282 lat (msec): min=560, max=12473, 
avg=7069.58, stdev=5390.90 00:19:21.282 clat percentiles (msec): 00:19:21.282 | 1.00th=[ 567], 5.00th=[ 600], 10.00th=[ 625], 20.00th=[ 642], 00:19:21.282 | 30.00th=[ 676], 40.00th=[ 4010], 50.00th=[ 8557], 60.00th=[12013], 00:19:21.282 | 70.00th=[12147], 80.00th=[12281], 90.00th=[12416], 95.00th=[12416], 00:19:21.282 | 99.00th=[12416], 99.50th=[12416], 99.90th=[12416], 99.95th=[12416], 00:19:21.282 | 99.99th=[12416] 00:19:21.282 bw ( KiB/s): min= 1851, max=174080, per=1.15%, avg=34783.17, stdev=68558.07, samples=6 00:19:21.282 iops : min= 1, max= 170, avg=33.83, stdev=67.03, samples=6 00:19:21.282 lat (msec) : 750=37.12%, 2000=0.44%, >=2000=62.45% 00:19:21.282 cpu : usr=0.03%, sys=0.89%, ctx=259, majf=0, minf=32769 00:19:21.282 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=14.0%, >=64=72.5% 00:19:21.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.282 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:19:21.282 issued rwts: total=229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.282 job0: (groupid=0, jobs=1): err= 0: pid=1493120: Mon Jul 15 23:45:08 2024 00:19:21.282 read: IOPS=3, BW=3971KiB/s (4066kB/s)(50.0MiB/12895msec) 00:19:21.282 slat (usec): min=383, max=2106.7k, avg=215457.61, stdev=624981.20 00:19:21.282 clat (msec): min=2121, max=12893, avg=10651.64, stdev=2707.23 00:19:21.282 lat (msec): min=4184, max=12894, avg=10867.10, stdev=2428.87 00:19:21.282 clat percentiles (msec): 00:19:21.282 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8557], 00:19:21.282 | 30.00th=[10671], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:19:21.282 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953], 00:19:21.282 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:21.282 | 99.99th=[12953] 00:19:21.282 lat (msec) : >=2000=100.00% 00:19:21.282 cpu : usr=0.00%, sys=0.26%, ctx=92, majf=0, minf=12801 00:19:21.282 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:19:21.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.282 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.282 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.282 job0: (groupid=0, jobs=1): err= 0: pid=1493121: Mon Jul 15 23:45:08 2024 00:19:21.282 read: IOPS=33, BW=33.7MiB/s (35.3MB/s)(432MiB/12814msec) 00:19:21.282 slat (usec): min=44, max=2085.3k, avg=24691.63, stdev=190606.71 00:19:21.282 clat (msec): min=660, max=11139, avg=3651.66, stdev=4429.42 00:19:21.282 lat (msec): min=662, max=11142, avg=3676.35, stdev=4441.04 00:19:21.282 clat percentiles (msec): 00:19:21.282 | 1.00th=[ 659], 5.00th=[ 667], 10.00th=[ 667], 20.00th=[ 667], 00:19:21.282 | 30.00th=[ 667], 40.00th=[ 676], 50.00th=[ 709], 60.00th=[ 768], 00:19:21.282 | 70.00th=[ 4799], 80.00th=[10537], 90.00th=[10939], 95.00th=[10939], 00:19:21.282 | 99.00th=[11073], 99.50th=[11073], 99.90th=[11073], 99.95th=[11073], 00:19:21.282 | 99.99th=[11073] 00:19:21.282 bw ( KiB/s): min= 1450, max=200704, per=2.28%, avg=69295.67, stdev=79984.16, samples=9 00:19:21.282 iops : min= 1, max= 196, avg=67.56, stdev=78.04, samples=9 00:19:21.282 lat (msec) : 750=59.03%, 1000=8.33%, >=2000=32.64% 00:19:21.282 cpu : usr=0.02%, sys=0.66%, ctx=452, majf=0, minf=32769 00:19:21.282 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 
8=1.9%, 16=3.7%, 32=7.4%, >=64=85.4% 00:19:21.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.282 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.282 issued rwts: total=432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.282 job0: (groupid=0, jobs=1): err= 0: pid=1493122: Mon Jul 15 23:45:08 2024 00:19:21.282 read: IOPS=2, BW=2149KiB/s (2201kB/s)(27.0MiB/12865msec) 00:19:21.282 slat (usec): min=1529, max=2107.5k, avg=397692.64, stdev=797263.25 00:19:21.282 clat (msec): min=2126, max=12832, avg=9247.14, stdev=3362.84 00:19:21.282 lat (msec): min=4232, max=12863, avg=9644.83, stdev=3114.05 00:19:21.282 clat percentiles (msec): 00:19:21.282 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:21.282 | 30.00th=[ 6342], 40.00th=[ 8423], 50.00th=[ 8557], 60.00th=[12550], 00:19:21.282 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12684], 95.00th=[12818], 00:19:21.282 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.282 | 99.99th=[12818] 00:19:21.282 lat (msec) : >=2000=100.00% 00:19:21.282 cpu : usr=0.00%, sys=0.11%, ctx=100, majf=0, minf=6913 00:19:21.282 IO depths : 1=3.7%, 2=7.4%, 4=14.8%, 8=29.6%, 16=44.4%, 32=0.0%, >=64=0.0% 00:19:21.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.282 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:21.282 issued rwts: total=27,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.282 job0: (groupid=0, jobs=1): err= 0: pid=1493123: Mon Jul 15 23:45:08 2024 00:19:21.282 read: IOPS=4, BW=5096KiB/s (5218kB/s)(64.0MiB/12861msec) 00:19:21.282 slat (usec): min=534, max=3810.1k, avg=167787.54, stdev=643720.95 00:19:21.282 clat (msec): min=2121, max=12817, avg=10148.10, stdev=1666.83 00:19:21.282 lat (msec): min=4223, max=12859, avg=10315.88, stdev=1357.86 00:19:21.282 clat percentiles (msec): 00:19:21.282 | 1.00th=[ 2123], 5.00th=[ 6409], 10.00th=[10268], 20.00th=[10268], 00:19:21.282 | 30.00th=[10268], 40.00th=[10402], 50.00th=[10402], 60.00th=[10537], 00:19:21.282 | 70.00th=[10537], 80.00th=[10537], 90.00th=[10671], 95.00th=[10671], 00:19:21.282 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.282 | 99.99th=[12818] 00:19:21.282 lat (msec) : >=2000=100.00% 00:19:21.282 cpu : usr=0.00%, sys=0.27%, ctx=138, majf=0, minf=16385 00:19:21.282 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:21.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.282 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.282 issued rwts: total=64,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.282 job0: (groupid=0, jobs=1): err= 0: pid=1493124: Mon Jul 15 23:45:08 2024 00:19:21.282 read: IOPS=2, BW=2392KiB/s (2450kB/s)(30.0MiB/12841msec) 00:19:21.282 slat (usec): min=760, max=2100.4k, avg=356829.34, stdev=780864.32 00:19:21.282 clat (msec): min=2135, max=12839, avg=9961.23, stdev=3663.79 00:19:21.282 lat (msec): min=4202, max=12840, avg=10318.06, stdev=3386.11 00:19:21.282 clat percentiles (msec): 00:19:21.282 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4279], 00:19:21.282 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12684], 60.00th=[12818], 00:19:21.282 | 
70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:19:21.282 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.282 | 99.99th=[12818] 00:19:21.282 lat (msec) : >=2000=100.00% 00:19:21.282 cpu : usr=0.00%, sys=0.18%, ctx=60, majf=0, minf=7681 00:19:21.282 IO depths : 1=3.3%, 2=6.7%, 4=13.3%, 8=26.7%, 16=50.0%, 32=0.0%, >=64=0.0% 00:19:21.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.282 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:21.282 issued rwts: total=30,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.282 job0: (groupid=0, jobs=1): err= 0: pid=1493125: Mon Jul 15 23:45:08 2024 00:19:21.282 read: IOPS=30, BW=30.6MiB/s (32.0MB/s)(327MiB/10702msec) 00:19:21.282 slat (usec): min=40, max=2126.9k, avg=32639.85, stdev=223078.36 00:19:21.282 clat (msec): min=27, max=8574, avg=3139.06, stdev=2666.76 00:19:21.282 lat (msec): min=856, max=10701, avg=3171.70, stdev=2683.77 00:19:21.282 clat percentiles (msec): 00:19:21.282 | 1.00th=[ 852], 5.00th=[ 860], 10.00th=[ 860], 20.00th=[ 869], 00:19:21.282 | 30.00th=[ 869], 40.00th=[ 869], 50.00th=[ 944], 60.00th=[ 3171], 00:19:21.282 | 70.00th=[ 6409], 80.00th=[ 6678], 90.00th=[ 6879], 95.00th=[ 7080], 00:19:21.282 | 99.00th=[ 7080], 99.50th=[ 8490], 99.90th=[ 8557], 99.95th=[ 8557], 00:19:21.282 | 99.99th=[ 8557] 00:19:21.282 bw ( KiB/s): min= 4096, max=155648, per=2.24%, avg=67912.33, stdev=61847.66, samples=6 00:19:21.282 iops : min= 4, max= 152, avg=66.17, stdev=60.49, samples=6 00:19:21.282 lat (msec) : 50=0.31%, 1000=53.21%, >=2000=46.48% 00:19:21.282 cpu : usr=0.02%, sys=0.86%, ctx=330, majf=0, minf=32769 00:19:21.282 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.9%, 32=9.8%, >=64=80.7% 00:19:21.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.282 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:21.282 issued rwts: total=327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.282 job0: (groupid=0, jobs=1): err= 0: pid=1493126: Mon Jul 15 23:45:08 2024 00:19:21.282 read: IOPS=2, BW=3011KiB/s (3083kB/s)(38.0MiB/12924msec) 00:19:21.282 slat (usec): min=748, max=2170.6k, avg=284124.23, stdev=724321.04 00:19:21.282 clat (msec): min=2126, max=12922, avg=11962.02, stdev=2494.52 00:19:21.282 lat (msec): min=4208, max=12923, avg=12246.14, stdev=1884.16 00:19:21.282 clat percentiles (msec): 00:19:21.282 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 8490], 20.00th=[12818], 00:19:21.282 | 30.00th=[12818], 40.00th=[12818], 50.00th=[12818], 60.00th=[12953], 00:19:21.282 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:21.282 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:21.282 | 99.99th=[12953] 00:19:21.282 lat (msec) : >=2000=100.00% 00:19:21.282 cpu : usr=0.00%, sys=0.25%, ctx=87, majf=0, minf=9729 00:19:21.282 IO depths : 1=2.6%, 2=5.3%, 4=10.5%, 8=21.1%, 16=42.1%, 32=18.4%, >=64=0.0% 00:19:21.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.282 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.282 issued rwts: total=38,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.282 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.282 job1: (groupid=0, jobs=1): err= 0: pid=1493127: Mon Jul 15 23:45:08 2024 
00:19:21.283 read: IOPS=12, BW=12.2MiB/s (12.8MB/s)(156MiB/12816msec) 00:19:21.283 slat (usec): min=101, max=2148.7k, avg=68445.45, stdev=328070.76 00:19:21.283 clat (msec): min=2137, max=7731, avg=5679.69, stdev=962.17 00:19:21.283 lat (msec): min=3452, max=8394, avg=5748.14, stdev=950.18 00:19:21.283 clat percentiles (msec): 00:19:21.283 | 1.00th=[ 3440], 5.00th=[ 3473], 10.00th=[ 5000], 20.00th=[ 5134], 00:19:21.283 | 30.00th=[ 5269], 40.00th=[ 5470], 50.00th=[ 5604], 60.00th=[ 5738], 00:19:21.283 | 70.00th=[ 6007], 80.00th=[ 6409], 90.00th=[ 6409], 95.00th=[ 7684], 00:19:21.283 | 99.00th=[ 7752], 99.50th=[ 7752], 99.90th=[ 7752], 99.95th=[ 7752], 00:19:21.283 | 99.99th=[ 7752] 00:19:21.283 bw ( KiB/s): min= 1428, max=55296, per=0.65%, avg=19590.67, stdev=30923.28, samples=3 00:19:21.283 iops : min= 1, max= 54, avg=19.00, stdev=30.32, samples=3 00:19:21.283 lat (msec) : >=2000=100.00% 00:19:21.283 cpu : usr=0.02%, sys=0.66%, ctx=272, majf=0, minf=32769 00:19:21.283 IO depths : 1=0.6%, 2=1.3%, 4=2.6%, 8=5.1%, 16=10.3%, 32=20.5%, >=64=59.6% 00:19:21.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.283 complete : 0=0.0%, 4=96.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=3.3% 00:19:21.283 issued rwts: total=156,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.283 job1: (groupid=0, jobs=1): err= 0: pid=1493128: Mon Jul 15 23:45:08 2024 00:19:21.283 read: IOPS=76, BW=76.2MiB/s (79.9MB/s)(980MiB/12860msec) 00:19:21.283 slat (usec): min=38, max=2062.3k, avg=10942.52, stdev=111786.17 00:19:21.283 clat (msec): min=252, max=8788, avg=1568.94, stdev=2702.74 00:19:21.283 lat (msec): min=253, max=8789, avg=1579.89, stdev=2711.57 00:19:21.283 clat percentiles (msec): 00:19:21.283 | 1.00th=[ 253], 5.00th=[ 255], 10.00th=[ 257], 20.00th=[ 259], 00:19:21.283 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 271], 60.00th=[ 275], 00:19:21.283 | 70.00th=[ 827], 80.00th=[ 1720], 90.00th=[ 8490], 95.00th=[ 8658], 00:19:21.283 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[ 8792], 99.95th=[ 8792], 00:19:21.283 | 99.99th=[ 8792] 00:19:21.283 bw ( KiB/s): min= 2048, max=497664, per=5.75%, avg=174694.40, stdev=203058.74, samples=10 00:19:21.283 iops : min= 2, max= 486, avg=170.60, stdev=198.30, samples=10 00:19:21.283 lat (msec) : 500=66.12%, 750=3.06%, 1000=2.65%, 2000=13.88%, >=2000=14.29% 00:19:21.283 cpu : usr=0.02%, sys=1.02%, ctx=1199, majf=0, minf=32769 00:19:21.283 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.3%, >=64=93.6% 00:19:21.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.283 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.283 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.283 job1: (groupid=0, jobs=1): err= 0: pid=1493129: Mon Jul 15 23:45:08 2024 00:19:21.283 read: IOPS=45, BW=45.7MiB/s (47.9MB/s)(459MiB/10051msec) 00:19:21.283 slat (usec): min=40, max=2094.1k, avg=21788.29, stdev=165532.48 00:19:21.283 clat (msec): min=46, max=7875, avg=962.52, stdev=974.37 00:19:21.283 lat (msec): min=57, max=7878, avg=984.31, stdev=1026.23 00:19:21.283 clat percentiles (msec): 00:19:21.283 | 1.00th=[ 63], 5.00th=[ 157], 10.00th=[ 292], 20.00th=[ 584], 00:19:21.283 | 30.00th=[ 802], 40.00th=[ 835], 50.00th=[ 852], 60.00th=[ 860], 00:19:21.283 | 70.00th=[ 869], 80.00th=[ 1011], 90.00th=[ 1301], 95.00th=[ 1569], 00:19:21.283 | 99.00th=[ 
7752], 99.50th=[ 7752], 99.90th=[ 7886], 99.95th=[ 7886], 00:19:21.283 | 99.99th=[ 7886] 00:19:21.283 bw ( KiB/s): min=49152, max=173386, per=4.47%, avg=135848.40, stdev=49414.62, samples=5 00:19:21.283 iops : min= 48, max= 169, avg=132.60, stdev=48.20, samples=5 00:19:21.283 lat (msec) : 50=0.22%, 100=2.83%, 250=5.23%, 500=9.15%, 750=7.84% 00:19:21.283 lat (msec) : 1000=53.81%, 2000=17.43%, >=2000=3.49% 00:19:21.283 cpu : usr=0.03%, sys=1.21%, ctx=565, majf=0, minf=32769 00:19:21.283 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.3% 00:19:21.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.283 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.283 issued rwts: total=459,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.283 job1: (groupid=0, jobs=1): err= 0: pid=1493130: Mon Jul 15 23:45:08 2024 00:19:21.283 read: IOPS=41, BW=42.0MiB/s (44.0MB/s)(541MiB/12895msec) 00:19:21.283 slat (usec): min=53, max=2081.2k, avg=19879.31, stdev=147137.01 00:19:21.283 clat (msec): min=402, max=6547, avg=2452.82, stdev=2330.85 00:19:21.283 lat (msec): min=405, max=6547, avg=2472.70, stdev=2337.00 00:19:21.283 clat percentiles (msec): 00:19:21.283 | 1.00th=[ 409], 5.00th=[ 435], 10.00th=[ 477], 20.00th=[ 592], 00:19:21.283 | 30.00th=[ 709], 40.00th=[ 735], 50.00th=[ 785], 60.00th=[ 2366], 00:19:21.283 | 70.00th=[ 4732], 80.00th=[ 4933], 90.00th=[ 6275], 95.00th=[ 6477], 00:19:21.283 | 99.00th=[ 6544], 99.50th=[ 6544], 99.90th=[ 6544], 99.95th=[ 6544], 00:19:21.283 | 99.99th=[ 6544] 00:19:21.283 bw ( KiB/s): min= 2048, max=299008, per=3.10%, avg=94208.00, stdev=104085.96, samples=9 00:19:21.283 iops : min= 2, max= 292, avg=92.00, stdev=101.65, samples=9 00:19:21.283 lat (msec) : 500=12.01%, 750=31.61%, 1000=16.08%, >=2000=40.30% 00:19:21.283 cpu : usr=0.02%, sys=0.86%, ctx=825, majf=0, minf=32769 00:19:21.283 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.4% 00:19:21.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.283 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.283 issued rwts: total=541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.283 job1: (groupid=0, jobs=1): err= 0: pid=1493131: Mon Jul 15 23:45:08 2024 00:19:21.283 read: IOPS=38, BW=38.6MiB/s (40.4MB/s)(413MiB/10708msec) 00:19:21.283 slat (usec): min=41, max=2062.6k, avg=25919.02, stdev=186635.79 00:19:21.283 clat (usec): min=1852, max=7816.6k, avg=3193001.21, stdev=2825044.56 00:19:21.283 lat (msec): min=525, max=7816, avg=3218.92, stdev=2827.79 00:19:21.283 clat percentiles (msec): 00:19:21.283 | 1.00th=[ 527], 5.00th=[ 625], 10.00th=[ 625], 20.00th=[ 659], 00:19:21.283 | 30.00th=[ 802], 40.00th=[ 1687], 50.00th=[ 1854], 60.00th=[ 2072], 00:19:21.283 | 70.00th=[ 4665], 80.00th=[ 7684], 90.00th=[ 7819], 95.00th=[ 7819], 00:19:21.283 | 99.00th=[ 7819], 99.50th=[ 7819], 99.90th=[ 7819], 99.95th=[ 7819], 00:19:21.283 | 99.99th=[ 7819] 00:19:21.283 bw ( KiB/s): min=10240, max=221184, per=2.40%, avg=72963.00, stdev=76774.37, samples=8 00:19:21.283 iops : min= 10, max= 216, avg=71.25, stdev=74.98, samples=8 00:19:21.283 lat (msec) : 2=0.24%, 750=29.06%, 1000=1.21%, 2000=27.12%, >=2000=42.37% 00:19:21.283 cpu : usr=0.01%, sys=1.05%, ctx=428, majf=0, minf=32769 00:19:21.283 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, 
>=64=84.7% 00:19:21.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.283 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.283 issued rwts: total=413,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.283 job1: (groupid=0, jobs=1): err= 0: pid=1493132: Mon Jul 15 23:45:08 2024 00:19:21.283 read: IOPS=41, BW=41.4MiB/s (43.4MB/s)(445MiB/10745msec) 00:19:21.283 slat (usec): min=50, max=3819.4k, avg=24137.23, stdev=205929.33 00:19:21.283 clat (msec): min=2, max=7209, avg=2803.24, stdev=2109.48 00:19:21.283 lat (msec): min=1127, max=7213, avg=2827.37, stdev=2110.92 00:19:21.283 clat percentiles (msec): 00:19:21.283 | 1.00th=[ 1116], 5.00th=[ 1150], 10.00th=[ 1200], 20.00th=[ 1250], 00:19:21.283 | 30.00th=[ 1318], 40.00th=[ 1368], 50.00th=[ 1435], 60.00th=[ 1670], 00:19:21.283 | 70.00th=[ 3406], 80.00th=[ 4178], 90.00th=[ 6678], 95.00th=[ 6879], 00:19:21.283 | 99.00th=[ 7148], 99.50th=[ 7148], 99.90th=[ 7215], 99.95th=[ 7215], 00:19:21.283 | 99.99th=[ 7215] 00:19:21.283 bw ( KiB/s): min= 4096, max=118784, per=2.67%, avg=81130.50, stdev=35008.93, samples=8 00:19:21.283 iops : min= 4, max= 116, avg=79.12, stdev=34.17, samples=8 00:19:21.283 lat (msec) : 4=0.22%, 2000=61.35%, >=2000=38.43% 00:19:21.283 cpu : usr=0.01%, sys=0.87%, ctx=926, majf=0, minf=32769 00:19:21.283 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.2%, >=64=85.8% 00:19:21.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.283 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.283 issued rwts: total=445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.283 job1: (groupid=0, jobs=1): err= 0: pid=1493133: Mon Jul 15 23:45:08 2024 00:19:21.283 read: IOPS=43, BW=43.3MiB/s (45.4MB/s)(555MiB/12823msec) 00:19:21.283 slat (usec): min=50, max=2067.5k, avg=19251.39, stdev=148963.27 00:19:21.283 clat (msec): min=653, max=9761, avg=2812.81, stdev=3354.24 00:19:21.283 lat (msec): min=654, max=9806, avg=2832.07, stdev=3364.46 00:19:21.283 clat percentiles (msec): 00:19:21.283 | 1.00th=[ 651], 5.00th=[ 718], 10.00th=[ 743], 20.00th=[ 818], 00:19:21.283 | 30.00th=[ 827], 40.00th=[ 860], 50.00th=[ 936], 60.00th=[ 1083], 00:19:21.283 | 70.00th=[ 1368], 80.00th=[ 6409], 90.00th=[ 9329], 95.00th=[ 9597], 00:19:21.283 | 99.00th=[ 9731], 99.50th=[ 9731], 99.90th=[ 9731], 99.95th=[ 9731], 00:19:21.283 | 99.99th=[ 9731] 00:19:21.283 bw ( KiB/s): min= 1414, max=188416, per=2.62%, avg=79593.91, stdev=60066.34, samples=11 00:19:21.283 iops : min= 1, max= 184, avg=77.55, stdev=58.68, samples=11 00:19:21.283 lat (msec) : 750=12.43%, 1000=40.36%, 2000=21.26%, >=2000=25.95% 00:19:21.283 cpu : usr=0.02%, sys=0.76%, ctx=850, majf=0, minf=32769 00:19:21.283 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6% 00:19:21.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.283 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.283 issued rwts: total=555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.283 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.283 job1: (groupid=0, jobs=1): err= 0: pid=1493134: Mon Jul 15 23:45:08 2024 00:19:21.283 read: IOPS=83, BW=83.5MiB/s (87.5MB/s)(1078MiB/12913msec) 00:19:21.283 slat (usec): min=43, max=2148.3k, avg=9989.07, stdev=91028.74 00:19:21.283 clat (msec): 
min=462, max=7957, avg=1478.87, stdev=2175.70 00:19:21.283 lat (msec): min=465, max=7961, avg=1488.86, stdev=2183.07 00:19:21.283 clat percentiles (msec): 00:19:21.283 | 1.00th=[ 468], 5.00th=[ 485], 10.00th=[ 506], 20.00th=[ 518], 00:19:21.283 | 30.00th=[ 527], 40.00th=[ 542], 50.00th=[ 575], 60.00th=[ 651], 00:19:21.283 | 70.00th=[ 869], 80.00th=[ 1183], 90.00th=[ 7148], 95.00th=[ 7617], 00:19:21.283 | 99.00th=[ 7886], 99.50th=[ 7953], 99.90th=[ 7953], 99.95th=[ 7953], 00:19:21.283 | 99.99th=[ 7953] 00:19:21.284 bw ( KiB/s): min= 1851, max=264192, per=4.58%, avg=139097.79, stdev=94577.36, samples=14 00:19:21.284 iops : min= 1, max= 258, avg=135.64, stdev=92.46, samples=14 00:19:21.284 lat (msec) : 500=9.46%, 750=54.08%, 1000=11.50%, 2000=12.71%, >=2000=12.24% 00:19:21.284 cpu : usr=0.07%, sys=1.27%, ctx=1465, majf=0, minf=32769 00:19:21.284 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:19:21.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.284 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.284 issued rwts: total=1078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.284 job1: (groupid=0, jobs=1): err= 0: pid=1493135: Mon Jul 15 23:45:08 2024 00:19:21.284 read: IOPS=19, BW=19.0MiB/s (19.9MB/s)(245MiB/12883msec) 00:19:21.284 slat (usec): min=384, max=2090.6k, avg=43818.03, stdev=262563.00 00:19:21.284 clat (msec): min=1123, max=11825, avg=6395.47, stdev=4737.06 00:19:21.284 lat (msec): min=1125, max=11829, avg=6439.28, stdev=4737.80 00:19:21.284 clat percentiles (msec): 00:19:21.284 | 1.00th=[ 1133], 5.00th=[ 1167], 10.00th=[ 1183], 20.00th=[ 1267], 00:19:21.284 | 30.00th=[ 1318], 40.00th=[ 1334], 50.00th=[ 6409], 60.00th=[10805], 00:19:21.284 | 70.00th=[11073], 80.00th=[11342], 90.00th=[11610], 95.00th=[11745], 00:19:21.284 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:19:21.284 | 99.99th=[11879] 00:19:21.284 bw ( KiB/s): min= 2048, max=94019, per=0.99%, avg=30181.12, stdev=35557.01, samples=8 00:19:21.284 iops : min= 2, max= 91, avg=29.12, stdev=34.71, samples=8 00:19:21.284 lat (msec) : 2000=42.04%, >=2000=57.96% 00:19:21.284 cpu : usr=0.00%, sys=0.68%, ctx=514, majf=0, minf=32769 00:19:21.284 IO depths : 1=0.4%, 2=0.8%, 4=1.6%, 8=3.3%, 16=6.5%, 32=13.1%, >=64=74.3% 00:19:21.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.284 complete : 0=0.0%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.8% 00:19:21.284 issued rwts: total=245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.284 job1: (groupid=0, jobs=1): err= 0: pid=1493136: Mon Jul 15 23:45:08 2024 00:19:21.284 read: IOPS=34, BW=34.3MiB/s (35.9MB/s)(439MiB/12817msec) 00:19:21.284 slat (usec): min=60, max=2053.1k, avg=24321.50, stdev=184279.98 00:19:21.284 clat (msec): min=377, max=12639, avg=2973.41, stdev=3266.18 00:19:21.284 lat (msec): min=380, max=12652, avg=2997.74, stdev=3287.37 00:19:21.284 clat percentiles (msec): 00:19:21.284 | 1.00th=[ 380], 5.00th=[ 380], 10.00th=[ 384], 20.00th=[ 388], 00:19:21.284 | 30.00th=[ 542], 40.00th=[ 718], 50.00th=[ 986], 60.00th=[ 2635], 00:19:21.284 | 70.00th=[ 3205], 80.00th=[ 6275], 90.00th=[ 8658], 95.00th=[ 8792], 00:19:21.284 | 99.00th=[ 8792], 99.50th=[ 8792], 99.90th=[12684], 99.95th=[12684], 00:19:21.284 | 99.99th=[12684] 00:19:21.284 bw ( KiB/s): min= 1436, max=335872, per=3.00%, 
avg=91194.86, stdev=121414.43, samples=7 00:19:21.284 iops : min= 1, max= 328, avg=89.00, stdev=118.62, samples=7 00:19:21.284 lat (msec) : 500=28.70%, 750=12.07%, 1000=9.79%, 2000=8.43%, >=2000=41.00% 00:19:21.284 cpu : usr=0.03%, sys=0.71%, ctx=708, majf=0, minf=32769 00:19:21.284 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.3%, >=64=85.6% 00:19:21.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.284 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.284 issued rwts: total=439,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.284 job1: (groupid=0, jobs=1): err= 0: pid=1493137: Mon Jul 15 23:45:08 2024 00:19:21.284 read: IOPS=37, BW=37.2MiB/s (39.0MB/s)(402MiB/10813msec) 00:19:21.284 slat (usec): min=38, max=2101.1k, avg=24881.48, stdev=177146.50 00:19:21.284 clat (msec): min=405, max=5868, avg=2389.69, stdev=1667.27 00:19:21.284 lat (msec): min=405, max=5870, avg=2414.58, stdev=1674.56 00:19:21.284 clat percentiles (msec): 00:19:21.284 | 1.00th=[ 409], 5.00th=[ 456], 10.00th=[ 642], 20.00th=[ 911], 00:19:21.284 | 30.00th=[ 1150], 40.00th=[ 1318], 50.00th=[ 1670], 60.00th=[ 2702], 00:19:21.284 | 70.00th=[ 3339], 80.00th=[ 3641], 90.00th=[ 5537], 95.00th=[ 5738], 00:19:21.284 | 99.00th=[ 5805], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:19:21.284 | 99.99th=[ 5873] 00:19:21.284 bw ( KiB/s): min= 2048, max=268288, per=3.09%, avg=93866.67, stdev=92195.64, samples=6 00:19:21.284 iops : min= 2, max= 262, avg=91.67, stdev=90.03, samples=6 00:19:21.284 lat (msec) : 500=5.97%, 750=6.47%, 1000=11.19%, 2000=30.60%, >=2000=45.77% 00:19:21.284 cpu : usr=0.03%, sys=1.05%, ctx=747, majf=0, minf=32769 00:19:21.284 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.0%, 32=8.0%, >=64=84.3% 00:19:21.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.284 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:21.284 issued rwts: total=402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.284 job1: (groupid=0, jobs=1): err= 0: pid=1493138: Mon Jul 15 23:45:08 2024 00:19:21.284 read: IOPS=24, BW=24.2MiB/s (25.4MB/s)(311MiB/12863msec) 00:19:21.284 slat (usec): min=373, max=2081.2k, avg=34455.01, stdev=232813.88 00:19:21.284 clat (msec): min=816, max=11451, avg=5038.39, stdev=4807.49 00:19:21.284 lat (msec): min=821, max=11454, avg=5072.84, stdev=4814.79 00:19:21.284 clat percentiles (msec): 00:19:21.284 | 1.00th=[ 818], 5.00th=[ 827], 10.00th=[ 827], 20.00th=[ 852], 00:19:21.284 | 30.00th=[ 885], 40.00th=[ 944], 50.00th=[ 1020], 60.00th=[ 6409], 00:19:21.284 | 70.00th=[10805], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:19:21.284 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:19:21.284 | 99.99th=[11476] 00:19:21.284 bw ( KiB/s): min= 2043, max=151552, per=1.38%, avg=41868.67, stdev=62925.94, samples=9 00:19:21.284 iops : min= 1, max= 148, avg=40.67, stdev=61.61, samples=9 00:19:21.284 lat (msec) : 1000=48.23%, 2000=6.11%, >=2000=45.66% 00:19:21.284 cpu : usr=0.03%, sys=0.68%, ctx=525, majf=0, minf=32769 00:19:21.284 IO depths : 1=0.3%, 2=0.6%, 4=1.3%, 8=2.6%, 16=5.1%, 32=10.3%, >=64=79.7% 00:19:21.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.284 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:21.284 issued rwts: total=311,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:19:21.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.284 job1: (groupid=0, jobs=1): err= 0: pid=1493139: Mon Jul 15 23:45:08 2024 00:19:21.284 read: IOPS=45, BW=45.6MiB/s (47.8MB/s)(586MiB/12847msec) 00:19:21.284 slat (usec): min=41, max=2065.6k, avg=18297.53, stdev=126736.15 00:19:21.284 clat (msec): min=696, max=6411, avg=1903.41, stdev=1418.23 00:19:21.284 lat (msec): min=756, max=7084, avg=1921.71, stdev=1431.53 00:19:21.284 clat percentiles (msec): 00:19:21.284 | 1.00th=[ 760], 5.00th=[ 785], 10.00th=[ 793], 20.00th=[ 818], 00:19:21.284 | 30.00th=[ 844], 40.00th=[ 1150], 50.00th=[ 1401], 60.00th=[ 1435], 00:19:21.284 | 70.00th=[ 1787], 80.00th=[ 3339], 90.00th=[ 4665], 95.00th=[ 4866], 00:19:21.284 | 99.00th=[ 5201], 99.50th=[ 6342], 99.90th=[ 6409], 99.95th=[ 6409], 00:19:21.284 | 99.99th=[ 6409] 00:19:21.284 bw ( KiB/s): min= 1370, max=186368, per=2.81%, avg=85395.82, stdev=62338.80, samples=11 00:19:21.284 iops : min= 1, max= 182, avg=83.36, stdev=60.92, samples=11 00:19:21.284 lat (msec) : 750=0.51%, 1000=37.54%, 2000=36.35%, >=2000=25.60% 00:19:21.284 cpu : usr=0.01%, sys=0.69%, ctx=837, majf=0, minf=32769 00:19:21.284 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.5%, >=64=89.2% 00:19:21.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.284 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.284 issued rwts: total=586,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.284 job2: (groupid=0, jobs=1): err= 0: pid=1493140: Mon Jul 15 23:45:08 2024 00:19:21.284 read: IOPS=79, BW=79.7MiB/s (83.6MB/s)(800MiB/10032msec) 00:19:21.284 slat (usec): min=40, max=118092, avg=12499.17, stdev=18303.79 00:19:21.284 clat (msec): min=28, max=2610, avg=1494.82, stdev=627.45 00:19:21.284 lat (msec): min=46, max=2632, avg=1507.32, stdev=629.67 00:19:21.284 clat percentiles (msec): 00:19:21.284 | 1.00th=[ 94], 5.00th=[ 368], 10.00th=[ 676], 20.00th=[ 927], 00:19:21.284 | 30.00th=[ 1116], 40.00th=[ 1385], 50.00th=[ 1452], 60.00th=[ 1603], 00:19:21.284 | 70.00th=[ 1972], 80.00th=[ 2165], 90.00th=[ 2333], 95.00th=[ 2366], 00:19:21.284 | 99.00th=[ 2567], 99.50th=[ 2601], 99.90th=[ 2601], 99.95th=[ 2601], 00:19:21.284 | 99.99th=[ 2601] 00:19:21.284 bw ( KiB/s): min=38912, max=190464, per=2.52%, avg=76552.17, stdev=41471.91, samples=18 00:19:21.284 iops : min= 38, max= 186, avg=74.67, stdev=40.48, samples=18 00:19:21.284 lat (msec) : 50=0.25%, 100=0.75%, 250=2.25%, 500=4.25%, 750=4.75% 00:19:21.284 lat (msec) : 1000=11.62%, 2000=48.00%, >=2000=28.12% 00:19:21.284 cpu : usr=0.03%, sys=1.36%, ctx=1885, majf=0, minf=32769 00:19:21.284 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:19:21.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.284 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.284 issued rwts: total=800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.284 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.284 job2: (groupid=0, jobs=1): err= 0: pid=1493141: Mon Jul 15 23:45:08 2024 00:19:21.284 read: IOPS=3, BW=3764KiB/s (3855kB/s)(47.0MiB/12785msec) 00:19:21.284 slat (usec): min=720, max=2085.2k, avg=226496.01, stdev=631010.39 00:19:21.284 clat (msec): min=2138, max=12738, avg=9107.12, stdev=3292.49 00:19:21.284 lat (msec): min=4186, max=12784, avg=9333.61, stdev=3166.45 00:19:21.284 clat 
percentiles (msec): 00:19:21.284 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6342], 00:19:21.284 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[10671], 00:19:21.284 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12684], 00:19:21.284 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:19:21.284 | 99.99th=[12684] 00:19:21.284 lat (msec) : >=2000=100.00% 00:19:21.284 cpu : usr=0.00%, sys=0.24%, ctx=87, majf=0, minf=12033 00:19:21.284 IO depths : 1=2.1%, 2=4.3%, 4=8.5%, 8=17.0%, 16=34.0%, 32=34.0%, >=64=0.0% 00:19:21.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.285 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.285 issued rwts: total=47,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.285 job2: (groupid=0, jobs=1): err= 0: pid=1493142: Mon Jul 15 23:45:08 2024 00:19:21.285 read: IOPS=82, BW=82.1MiB/s (86.1MB/s)(828MiB/10084msec) 00:19:21.285 slat (usec): min=29, max=118534, avg=12127.06, stdev=20642.58 00:19:21.285 clat (msec): min=39, max=2099, avg=1415.64, stdev=410.78 00:19:21.285 lat (msec): min=97, max=2102, avg=1427.77, stdev=411.47 00:19:21.285 clat percentiles (msec): 00:19:21.285 | 1.00th=[ 192], 5.00th=[ 693], 10.00th=[ 885], 20.00th=[ 1070], 00:19:21.285 | 30.00th=[ 1167], 40.00th=[ 1401], 50.00th=[ 1519], 60.00th=[ 1586], 00:19:21.285 | 70.00th=[ 1670], 80.00th=[ 1770], 90.00th=[ 1888], 95.00th=[ 1972], 00:19:21.285 | 99.00th=[ 2072], 99.50th=[ 2072], 99.90th=[ 2106], 99.95th=[ 2106], 00:19:21.285 | 99.99th=[ 2106] 00:19:21.285 bw ( KiB/s): min=53248, max=163840, per=2.78%, avg=84339.76, stdev=29755.44, samples=17 00:19:21.285 iops : min= 52, max= 160, avg=82.24, stdev=29.14, samples=17 00:19:21.285 lat (msec) : 50=0.12%, 100=0.12%, 250=1.21%, 500=2.17%, 750=1.57% 00:19:21.285 lat (msec) : 1000=10.39%, 2000=81.64%, >=2000=2.78% 00:19:21.285 cpu : usr=0.00%, sys=1.15%, ctx=1820, majf=0, minf=32769 00:19:21.285 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:19:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.285 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.285 issued rwts: total=828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.285 job2: (groupid=0, jobs=1): err= 0: pid=1493143: Mon Jul 15 23:45:08 2024 00:19:21.285 read: IOPS=30, BW=30.6MiB/s (32.1MB/s)(394MiB/12860msec) 00:19:21.285 slat (usec): min=85, max=2063.2k, avg=27186.64, stdev=180031.89 00:19:21.285 clat (msec): min=744, max=10711, avg=3958.32, stdev=2453.48 00:19:21.285 lat (msec): min=767, max=10735, avg=3985.51, stdev=2473.16 00:19:21.285 clat percentiles (msec): 00:19:21.285 | 1.00th=[ 776], 5.00th=[ 818], 10.00th=[ 885], 20.00th=[ 1011], 00:19:21.285 | 30.00th=[ 1116], 40.00th=[ 4732], 50.00th=[ 5067], 60.00th=[ 5336], 00:19:21.285 | 70.00th=[ 5604], 80.00th=[ 5805], 90.00th=[ 6074], 95.00th=[ 6409], 00:19:21.285 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:21.285 | 99.99th=[10671] 00:19:21.285 bw ( KiB/s): min= 2048, max=172032, per=1.64%, avg=49682.18, stdev=50587.70, samples=11 00:19:21.285 iops : min= 2, max= 168, avg=48.27, stdev=49.40, samples=11 00:19:21.285 lat (msec) : 750=0.25%, 1000=18.78%, 2000=17.77%, >=2000=63.20% 00:19:21.285 cpu : usr=0.04%, sys=0.84%, ctx=800, majf=0, minf=32769 
00:19:21.285 IO depths : 1=0.3%, 2=0.5%, 4=1.0%, 8=2.0%, 16=4.1%, 32=8.1%, >=64=84.0% 00:19:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.285 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:21.285 issued rwts: total=394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.285 job2: (groupid=0, jobs=1): err= 0: pid=1493144: Mon Jul 15 23:45:08 2024 00:19:21.285 read: IOPS=71, BW=71.4MiB/s (74.8MB/s)(717MiB/10045msec) 00:19:21.285 slat (usec): min=41, max=1960.6k, avg=13945.99, stdev=75257.57 00:19:21.285 clat (msec): min=42, max=3800, avg=1292.01, stdev=634.09 00:19:21.285 lat (msec): min=83, max=3825, avg=1305.95, stdev=642.76 00:19:21.285 clat percentiles (msec): 00:19:21.285 | 1.00th=[ 194], 5.00th=[ 642], 10.00th=[ 667], 20.00th=[ 709], 00:19:21.285 | 30.00th=[ 751], 40.00th=[ 927], 50.00th=[ 1183], 60.00th=[ 1452], 00:19:21.285 | 70.00th=[ 1670], 80.00th=[ 1854], 90.00th=[ 2140], 95.00th=[ 2333], 00:19:21.285 | 99.00th=[ 2567], 99.50th=[ 3775], 99.90th=[ 3809], 99.95th=[ 3809], 00:19:21.285 | 99.99th=[ 3809] 00:19:21.285 bw ( KiB/s): min=26624, max=190464, per=3.06%, avg=92890.08, stdev=56178.32, samples=13 00:19:21.285 iops : min= 26, max= 186, avg=90.54, stdev=54.84, samples=13 00:19:21.285 lat (msec) : 50=0.14%, 100=0.28%, 250=1.12%, 500=2.23%, 750=26.92% 00:19:21.285 lat (msec) : 1000=11.30%, 2000=44.49%, >=2000=13.53% 00:19:21.285 cpu : usr=0.03%, sys=1.61%, ctx=1305, majf=0, minf=32769 00:19:21.285 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:19:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.285 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.285 issued rwts: total=717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.285 job2: (groupid=0, jobs=1): err= 0: pid=1493145: Mon Jul 15 23:45:08 2024 00:19:21.285 read: IOPS=35, BW=35.2MiB/s (36.9MB/s)(450MiB/12798msec) 00:19:21.285 slat (usec): min=33, max=2053.0k, avg=23664.63, stdev=101889.18 00:19:21.285 clat (msec): min=1467, max=6902, avg=3303.68, stdev=1698.62 00:19:21.285 lat (msec): min=1473, max=6902, avg=3327.34, stdev=1698.40 00:19:21.285 clat percentiles (msec): 00:19:21.285 | 1.00th=[ 1502], 5.00th=[ 1586], 10.00th=[ 1787], 20.00th=[ 1955], 00:19:21.285 | 30.00th=[ 2106], 40.00th=[ 2232], 50.00th=[ 2668], 60.00th=[ 2970], 00:19:21.285 | 70.00th=[ 3071], 80.00th=[ 5269], 90.00th=[ 6409], 95.00th=[ 6611], 00:19:21.285 | 99.00th=[ 6812], 99.50th=[ 6879], 99.90th=[ 6879], 99.95th=[ 6879], 00:19:21.285 | 99.99th=[ 6879] 00:19:21.285 bw ( KiB/s): min= 1450, max=94208, per=1.36%, avg=41298.44, stdev=28041.96, samples=16 00:19:21.285 iops : min= 1, max= 92, avg=40.25, stdev=27.38, samples=16 00:19:21.285 lat (msec) : 2000=24.67%, >=2000=75.33% 00:19:21.285 cpu : usr=0.02%, sys=0.64%, ctx=1335, majf=0, minf=32769 00:19:21.285 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.8%, 16=3.6%, 32=7.1%, >=64=86.0% 00:19:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.285 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.285 issued rwts: total=450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.285 job2: (groupid=0, jobs=1): err= 0: pid=1493146: Mon Jul 15 23:45:08 2024 00:19:21.285 read: IOPS=2, 
BW=2791KiB/s (2858kB/s)(35.0MiB/12839msec) 00:19:21.285 slat (usec): min=729, max=2121.7k, avg=305676.96, stdev=741628.23 00:19:21.285 clat (msec): min=2139, max=12819, avg=10356.86, stdev=2944.62 00:19:21.285 lat (msec): min=4233, max=12838, avg=10662.53, stdev=2601.83 00:19:21.285 clat percentiles (msec): 00:19:21.285 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 6409], 00:19:21.285 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[12818], 00:19:21.285 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:19:21.285 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.285 | 99.99th=[12818] 00:19:21.285 lat (msec) : >=2000=100.00% 00:19:21.285 cpu : usr=0.00%, sys=0.19%, ctx=64, majf=0, minf=8961 00:19:21.285 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:19:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.285 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.285 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.285 job2: (groupid=0, jobs=1): err= 0: pid=1493147: Mon Jul 15 23:45:08 2024 00:19:21.285 read: IOPS=39, BW=39.3MiB/s (41.2MB/s)(422MiB/10735msec) 00:19:21.285 slat (usec): min=41, max=1980.6k, avg=23811.58, stdev=112915.25 00:19:21.285 clat (msec): min=683, max=7856, avg=2396.47, stdev=1055.93 00:19:21.285 lat (msec): min=738, max=8366, avg=2420.28, stdev=1079.82 00:19:21.285 clat percentiles (msec): 00:19:21.285 | 1.00th=[ 751], 5.00th=[ 1116], 10.00th=[ 1150], 20.00th=[ 1418], 00:19:21.285 | 30.00th=[ 1636], 40.00th=[ 1804], 50.00th=[ 2333], 60.00th=[ 2601], 00:19:21.285 | 70.00th=[ 3037], 80.00th=[ 3608], 90.00th=[ 3842], 95.00th=[ 4010], 00:19:21.285 | 99.00th=[ 4212], 99.50th=[ 4279], 99.90th=[ 7886], 99.95th=[ 7886], 00:19:21.285 | 99.99th=[ 7886] 00:19:21.285 bw ( KiB/s): min= 2052, max=83968, per=1.53%, avg=46468.15, stdev=28359.24, samples=13 00:19:21.285 iops : min= 2, max= 82, avg=45.31, stdev=27.72, samples=13 00:19:21.285 lat (msec) : 750=1.18%, 1000=2.37%, 2000=41.94%, >=2000=54.50% 00:19:21.285 cpu : usr=0.03%, sys=0.74%, ctx=1181, majf=0, minf=32769 00:19:21.285 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.9%, 16=3.8%, 32=7.6%, >=64=85.1% 00:19:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.285 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.285 issued rwts: total=422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.285 job2: (groupid=0, jobs=1): err= 0: pid=1493148: Mon Jul 15 23:45:08 2024 00:19:21.285 read: IOPS=65, BW=65.2MiB/s (68.4MB/s)(653MiB/10015msec) 00:19:21.285 slat (usec): min=36, max=1764.9k, avg=15311.39, stdev=72973.11 00:19:21.285 clat (msec): min=14, max=3196, avg=1504.07, stdev=869.92 00:19:21.285 lat (msec): min=15, max=3213, avg=1519.38, stdev=874.65 00:19:21.285 clat percentiles (msec): 00:19:21.285 | 1.00th=[ 23], 5.00th=[ 73], 10.00th=[ 288], 20.00th=[ 810], 00:19:21.285 | 30.00th=[ 953], 40.00th=[ 1150], 50.00th=[ 1435], 60.00th=[ 1754], 00:19:21.285 | 70.00th=[ 1888], 80.00th=[ 2299], 90.00th=[ 2836], 95.00th=[ 2970], 00:19:21.285 | 99.00th=[ 3071], 99.50th=[ 3138], 99.90th=[ 3205], 99.95th=[ 3205], 00:19:21.285 | 99.99th=[ 3205] 00:19:21.285 bw ( KiB/s): min= 4096, max=161792, per=2.27%, avg=68984.54, stdev=41410.21, samples=13 00:19:21.285 
iops : min= 4, max= 158, avg=67.31, stdev=40.37, samples=13 00:19:21.285 lat (msec) : 20=0.46%, 50=2.76%, 100=3.52%, 250=2.76%, 500=3.98% 00:19:21.285 lat (msec) : 750=4.13%, 1000=16.23%, 2000=39.51%, >=2000=26.65% 00:19:21.285 cpu : usr=0.03%, sys=0.95%, ctx=1571, majf=0, minf=32769 00:19:21.285 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.5%, 32=4.9%, >=64=90.4% 00:19:21.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.285 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.285 issued rwts: total=653,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.285 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.285 job2: (groupid=0, jobs=1): err= 0: pid=1493149: Mon Jul 15 23:45:08 2024 00:19:21.285 read: IOPS=65, BW=65.5MiB/s (68.6MB/s)(661MiB/10097msec) 00:19:21.285 slat (usec): min=71, max=2018.0k, avg=15142.68, stdev=109654.47 00:19:21.285 clat (msec): min=82, max=3883, avg=1621.05, stdev=1256.01 00:19:21.285 lat (msec): min=100, max=3889, avg=1636.19, stdev=1262.54 00:19:21.285 clat percentiles (msec): 00:19:21.285 | 1.00th=[ 122], 5.00th=[ 284], 10.00th=[ 489], 20.00th=[ 818], 00:19:21.286 | 30.00th=[ 844], 40.00th=[ 852], 50.00th=[ 936], 60.00th=[ 1116], 00:19:21.286 | 70.00th=[ 1854], 80.00th=[ 3507], 90.00th=[ 3675], 95.00th=[ 3742], 00:19:21.286 | 99.00th=[ 3876], 99.50th=[ 3876], 99.90th=[ 3876], 99.95th=[ 3876], 00:19:21.286 | 99.99th=[ 3876] 00:19:21.286 bw ( KiB/s): min= 4096, max=165888, per=3.27%, avg=99248.64, stdev=54111.57, samples=11 00:19:21.286 iops : min= 4, max= 162, avg=96.91, stdev=52.85, samples=11 00:19:21.286 lat (msec) : 100=0.15%, 250=4.24%, 500=6.20%, 750=5.60%, 1000=39.03% 00:19:21.286 lat (msec) : 2000=18.00%, >=2000=26.78% 00:19:21.286 cpu : usr=0.08%, sys=1.92%, ctx=880, majf=0, minf=32769 00:19:21.286 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:19:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.286 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.286 issued rwts: total=661,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.286 job2: (groupid=0, jobs=1): err= 0: pid=1493150: Mon Jul 15 23:45:08 2024 00:19:21.286 read: IOPS=15, BW=15.9MiB/s (16.7MB/s)(204MiB/12817msec) 00:19:21.286 slat (usec): min=41, max=2136.2k, avg=52323.56, stdev=273493.24 00:19:21.286 clat (msec): min=1347, max=11311, avg=7187.59, stdev=4143.43 00:19:21.286 lat (msec): min=1393, max=11313, avg=7239.92, stdev=4129.14 00:19:21.286 clat percentiles (msec): 00:19:21.286 | 1.00th=[ 1385], 5.00th=[ 1435], 10.00th=[ 1519], 20.00th=[ 1821], 00:19:21.286 | 30.00th=[ 2165], 40.00th=[ 8490], 50.00th=[10268], 60.00th=[10402], 00:19:21.286 | 70.00th=[10402], 80.00th=[10537], 90.00th=[10805], 95.00th=[11073], 00:19:21.286 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:19:21.286 | 99.99th=[11342] 00:19:21.286 bw ( KiB/s): min= 1406, max=86016, per=0.86%, avg=26175.67, stdev=34322.60, samples=6 00:19:21.286 iops : min= 1, max= 84, avg=25.50, stdev=33.57, samples=6 00:19:21.286 lat (msec) : 2000=24.51%, >=2000=75.49% 00:19:21.286 cpu : usr=0.01%, sys=0.57%, ctx=419, majf=0, minf=32769 00:19:21.286 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=3.9%, 16=7.8%, 32=15.7%, >=64=69.1% 00:19:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.286 complete : 0=0.0%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=1.3% 00:19:21.286 issued rwts: total=204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.286 job2: (groupid=0, jobs=1): err= 0: pid=1493151: Mon Jul 15 23:45:08 2024 00:19:21.286 read: IOPS=74, BW=74.1MiB/s (77.7MB/s)(743MiB/10031msec) 00:19:21.286 slat (usec): min=54, max=199621, avg=13458.02, stdev=22114.00 00:19:21.286 clat (msec): min=28, max=4220, avg=1638.30, stdev=1216.37 00:19:21.286 lat (msec): min=33, max=4224, avg=1651.76, stdev=1222.57 00:19:21.286 clat percentiles (msec): 00:19:21.286 | 1.00th=[ 79], 5.00th=[ 309], 10.00th=[ 600], 20.00th=[ 676], 00:19:21.286 | 30.00th=[ 743], 40.00th=[ 810], 50.00th=[ 1234], 60.00th=[ 1368], 00:19:21.286 | 70.00th=[ 2265], 80.00th=[ 2903], 90.00th=[ 3809], 95.00th=[ 4111], 00:19:21.286 | 99.00th=[ 4212], 99.50th=[ 4212], 99.90th=[ 4212], 99.95th=[ 4212], 00:19:21.286 | 99.99th=[ 4212] 00:19:21.286 bw ( KiB/s): min=10240, max=178176, per=2.19%, avg=66395.26, stdev=53921.32, samples=19 00:19:21.286 iops : min= 10, max= 174, avg=64.79, stdev=52.69, samples=19 00:19:21.286 lat (msec) : 50=0.54%, 100=0.94%, 250=2.42%, 500=4.04%, 750=24.09% 00:19:21.286 lat (msec) : 1000=13.73%, 2000=20.59%, >=2000=33.65% 00:19:21.286 cpu : usr=0.05%, sys=1.35%, ctx=2123, majf=0, minf=32769 00:19:21.286 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.3%, >=64=91.5% 00:19:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.286 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.286 issued rwts: total=743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.286 job2: (groupid=0, jobs=1): err= 0: pid=1493152: Mon Jul 15 23:45:08 2024 00:19:21.286 read: IOPS=4, BW=4468KiB/s (4575kB/s)(56.0MiB/12834msec) 00:19:21.286 slat (usec): min=496, max=2085.0k, avg=190839.40, stdev=589401.70 00:19:21.286 clat (msec): min=2146, max=12831, avg=9278.02, stdev=3298.85 00:19:21.286 lat (msec): min=4195, max=12833, avg=9468.86, stdev=3185.95 00:19:21.286 clat percentiles (msec): 00:19:21.286 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:21.286 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[10671], 00:19:21.286 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:19:21.286 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.286 | 99.99th=[12818] 00:19:21.286 lat (msec) : >=2000=100.00% 00:19:21.286 cpu : usr=0.00%, sys=0.30%, ctx=71, majf=0, minf=14337 00:19:21.286 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:19:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.286 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.286 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.286 job3: (groupid=0, jobs=1): err= 0: pid=1493153: Mon Jul 15 23:45:08 2024 00:19:21.286 read: IOPS=2, BW=2237KiB/s (2291kB/s)(28.0MiB/12816msec) 00:19:21.286 slat (msec): min=5, max=2098, avg=381.22, stdev=796.72 00:19:21.286 clat (msec): min=2141, max=12809, avg=8194.09, stdev=3416.59 00:19:21.286 lat (msec): min=4176, max=12815, avg=8575.31, stdev=3310.05 00:19:21.286 clat percentiles (msec): 00:19:21.286 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4178], 20.00th=[ 4245], 00:19:21.286 | 30.00th=[ 6342], 40.00th=[ 6409], 50.00th=[ 
8490], 60.00th=[ 8557], 00:19:21.286 | 70.00th=[10671], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:19:21.286 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.286 | 99.99th=[12818] 00:19:21.286 lat (msec) : >=2000=100.00% 00:19:21.286 cpu : usr=0.00%, sys=0.17%, ctx=72, majf=0, minf=7169 00:19:21.286 IO depths : 1=3.6%, 2=7.1%, 4=14.3%, 8=28.6%, 16=46.4%, 32=0.0%, >=64=0.0% 00:19:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.286 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:21.286 issued rwts: total=28,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.286 job3: (groupid=0, jobs=1): err= 0: pid=1493154: Mon Jul 15 23:45:08 2024 00:19:21.286 read: IOPS=33, BW=33.8MiB/s (35.5MB/s)(340MiB/10045msec) 00:19:21.286 slat (usec): min=42, max=2055.9k, avg=29516.14, stdev=182015.48 00:19:21.286 clat (msec): min=8, max=7589, avg=1598.70, stdev=1503.12 00:19:21.286 lat (msec): min=50, max=7635, avg=1628.21, stdev=1540.19 00:19:21.286 clat percentiles (msec): 00:19:21.286 | 1.00th=[ 53], 5.00th=[ 79], 10.00th=[ 232], 20.00th=[ 409], 00:19:21.286 | 30.00th=[ 827], 40.00th=[ 1217], 50.00th=[ 1469], 60.00th=[ 1636], 00:19:21.286 | 70.00th=[ 1754], 80.00th=[ 1804], 90.00th=[ 1888], 95.00th=[ 5671], 00:19:21.286 | 99.00th=[ 7550], 99.50th=[ 7550], 99.90th=[ 7617], 99.95th=[ 7617], 00:19:21.286 | 99.99th=[ 7617] 00:19:21.286 bw ( KiB/s): min=51200, max=83968, per=2.28%, avg=69120.00, stdev=13520.42, samples=4 00:19:21.286 iops : min= 50, max= 82, avg=67.50, stdev=13.20, samples=4 00:19:21.286 lat (msec) : 10=0.29%, 100=4.71%, 250=6.76%, 500=10.88%, 750=4.71% 00:19:21.286 lat (msec) : 1000=6.76%, 2000=55.88%, >=2000=10.00% 00:19:21.286 cpu : usr=0.01%, sys=0.75%, ctx=1025, majf=0, minf=32769 00:19:21.286 IO depths : 1=0.3%, 2=0.6%, 4=1.2%, 8=2.4%, 16=4.7%, 32=9.4%, >=64=81.5% 00:19:21.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.286 complete : 0=0.0%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.5% 00:19:21.286 issued rwts: total=340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.286 job3: (groupid=0, jobs=1): err= 0: pid=1493155: Mon Jul 15 23:45:08 2024 00:19:21.286 read: IOPS=77, BW=77.2MiB/s (80.9MB/s)(774MiB/10029msec) 00:19:21.286 slat (usec): min=165, max=99890, avg=12919.09, stdev=14746.13 00:19:21.286 clat (msec): min=25, max=2339, avg=1463.01, stdev=508.74 00:19:21.286 lat (msec): min=31, max=2377, avg=1475.93, stdev=510.30 00:19:21.286 clat percentiles (msec): 00:19:21.286 | 1.00th=[ 75], 5.00th=[ 405], 10.00th=[ 885], 20.00th=[ 1167], 00:19:21.286 | 30.00th=[ 1267], 40.00th=[ 1385], 50.00th=[ 1469], 60.00th=[ 1586], 00:19:21.287 | 70.00th=[ 1703], 80.00th=[ 1972], 90.00th=[ 2140], 95.00th=[ 2198], 00:19:21.287 | 99.00th=[ 2265], 99.50th=[ 2265], 99.90th=[ 2333], 99.95th=[ 2333], 00:19:21.287 | 99.99th=[ 2333] 00:19:21.287 bw ( KiB/s): min=43008, max=143360, per=2.73%, avg=82792.50, stdev=30977.08, samples=16 00:19:21.287 iops : min= 42, max= 140, avg=80.75, stdev=30.24, samples=16 00:19:21.287 lat (msec) : 50=0.52%, 100=0.90%, 250=1.81%, 500=2.84%, 750=2.97% 00:19:21.287 lat (msec) : 1000=5.81%, 2000=66.28%, >=2000=18.86% 00:19:21.287 cpu : usr=0.02%, sys=1.29%, ctx=2432, majf=0, minf=32769 00:19:21.287 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:19:21.287 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.287 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.287 issued rwts: total=774,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.287 job3: (groupid=0, jobs=1): err= 0: pid=1493156: Mon Jul 15 23:45:08 2024 00:19:21.287 read: IOPS=70, BW=70.1MiB/s (73.5MB/s)(705MiB/10052msec) 00:19:21.287 slat (usec): min=43, max=113946, avg=14190.82, stdev=18693.13 00:19:21.287 clat (msec): min=43, max=3704, avg=1740.69, stdev=802.44 00:19:21.287 lat (msec): min=56, max=3708, avg=1754.88, stdev=803.23 00:19:21.287 clat percentiles (msec): 00:19:21.287 | 1.00th=[ 171], 5.00th=[ 919], 10.00th=[ 1020], 20.00th=[ 1116], 00:19:21.287 | 30.00th=[ 1284], 40.00th=[ 1351], 50.00th=[ 1469], 60.00th=[ 1754], 00:19:21.287 | 70.00th=[ 1955], 80.00th=[ 2106], 90.00th=[ 3071], 95.00th=[ 3574], 00:19:21.287 | 99.00th=[ 3675], 99.50th=[ 3708], 99.90th=[ 3708], 99.95th=[ 3708], 00:19:21.287 | 99.99th=[ 3708] 00:19:21.287 bw ( KiB/s): min=18432, max=133120, per=2.17%, avg=65760.78, stdev=40208.48, samples=18 00:19:21.287 iops : min= 18, max= 130, avg=64.17, stdev=39.32, samples=18 00:19:21.287 lat (msec) : 50=0.14%, 100=0.43%, 250=1.13%, 500=0.99%, 750=0.85% 00:19:21.287 lat (msec) : 1000=5.39%, 2000=65.39%, >=2000=25.67% 00:19:21.287 cpu : usr=0.02%, sys=1.67%, ctx=1922, majf=0, minf=32107 00:19:21.287 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:19:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.287 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.287 issued rwts: total=705,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.287 job3: (groupid=0, jobs=1): err= 0: pid=1493157: Mon Jul 15 23:45:08 2024 00:19:21.287 read: IOPS=79, BW=79.8MiB/s (83.7MB/s)(802MiB/10050msec) 00:19:21.287 slat (usec): min=39, max=2091.6k, avg=12473.91, stdev=124287.02 00:19:21.287 clat (msec): min=41, max=7705, avg=1120.69, stdev=2039.93 00:19:21.287 lat (msec): min=57, max=7711, avg=1133.17, stdev=2053.20 00:19:21.287 clat percentiles (msec): 00:19:21.287 | 1.00th=[ 73], 5.00th=[ 161], 10.00th=[ 275], 20.00th=[ 347], 00:19:21.287 | 30.00th=[ 359], 40.00th=[ 380], 50.00th=[ 401], 60.00th=[ 422], 00:19:21.287 | 70.00th=[ 518], 80.00th=[ 676], 90.00th=[ 1502], 95.00th=[ 7617], 00:19:21.287 | 99.00th=[ 7684], 99.50th=[ 7684], 99.90th=[ 7684], 99.95th=[ 7684], 00:19:21.287 | 99.99th=[ 7684] 00:19:21.287 bw ( KiB/s): min=22483, max=364544, per=7.59%, avg=230392.50, stdev=133569.68, samples=6 00:19:21.287 iops : min= 21, max= 356, avg=224.83, stdev=130.74, samples=6 00:19:21.287 lat (msec) : 50=0.12%, 100=2.00%, 250=7.11%, 500=60.60%, 750=12.09% 00:19:21.287 lat (msec) : 1000=2.37%, 2000=5.74%, >=2000=9.98% 00:19:21.287 cpu : usr=0.04%, sys=1.45%, ctx=876, majf=0, minf=32769 00:19:21.287 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:19:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.287 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.287 issued rwts: total=802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.287 job3: (groupid=0, jobs=1): err= 0: pid=1493158: Mon Jul 15 23:45:08 2024 00:19:21.287 read: IOPS=60, BW=60.8MiB/s 
(63.8MB/s)(610MiB/10029msec) 00:19:21.287 slat (usec): min=72, max=1945.8k, avg=16390.38, stdev=79838.55 00:19:21.287 clat (msec): min=28, max=3951, avg=1552.27, stdev=675.86 00:19:21.287 lat (msec): min=28, max=3957, avg=1568.66, stdev=682.95 00:19:21.287 clat percentiles (msec): 00:19:21.287 | 1.00th=[ 40], 5.00th=[ 192], 10.00th=[ 542], 20.00th=[ 1200], 00:19:21.287 | 30.00th=[ 1519], 40.00th=[ 1603], 50.00th=[ 1670], 60.00th=[ 1787], 00:19:21.287 | 70.00th=[ 1804], 80.00th=[ 1838], 90.00th=[ 1905], 95.00th=[ 1938], 00:19:21.287 | 99.00th=[ 3910], 99.50th=[ 3943], 99.90th=[ 3943], 99.95th=[ 3943], 00:19:21.287 | 99.99th=[ 3943] 00:19:21.287 bw ( KiB/s): min=41042, max=120832, per=2.51%, avg=76097.38, stdev=19105.40, samples=13 00:19:21.287 iops : min= 40, max= 118, avg=74.31, stdev=18.67, samples=13 00:19:21.287 lat (msec) : 50=2.13%, 100=2.13%, 250=1.48%, 500=3.44%, 750=4.10% 00:19:21.287 lat (msec) : 1000=3.61%, 2000=79.67%, >=2000=3.44% 00:19:21.287 cpu : usr=0.03%, sys=1.06%, ctx=1976, majf=0, minf=32769 00:19:21.287 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.7% 00:19:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.287 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.287 issued rwts: total=610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.287 job3: (groupid=0, jobs=1): err= 0: pid=1493159: Mon Jul 15 23:45:08 2024 00:19:21.287 read: IOPS=4, BW=4855KiB/s (4972kB/s)(61.0MiB/12865msec) 00:19:21.287 slat (usec): min=597, max=2085.2k, avg=175744.72, stdev=550034.74 00:19:21.287 clat (msec): min=2143, max=12862, avg=10784.08, stdev=3227.02 00:19:21.287 lat (msec): min=4181, max=12864, avg=10959.82, stdev=3034.82 00:19:21.287 clat percentiles (msec): 00:19:21.287 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:21.287 | 30.00th=[10671], 40.00th=[12416], 50.00th=[12550], 60.00th=[12818], 00:19:21.287 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:19:21.287 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.287 | 99.99th=[12818] 00:19:21.287 lat (msec) : >=2000=100.00% 00:19:21.287 cpu : usr=0.00%, sys=0.38%, ctx=142, majf=0, minf=15617 00:19:21.287 IO depths : 1=1.6%, 2=3.3%, 4=6.6%, 8=13.1%, 16=26.2%, 32=49.2%, >=64=0.0% 00:19:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.287 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.287 issued rwts: total=61,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.287 job3: (groupid=0, jobs=1): err= 0: pid=1493160: Mon Jul 15 23:45:08 2024 00:19:21.287 read: IOPS=62, BW=62.6MiB/s (65.7MB/s)(629MiB/10043msec) 00:19:21.287 slat (usec): min=59, max=1921.7k, avg=15896.35, stdev=77984.73 00:19:21.287 clat (msec): min=41, max=5632, avg=1714.45, stdev=773.71 00:19:21.287 lat (msec): min=58, max=5635, avg=1730.34, stdev=783.92 00:19:21.287 clat percentiles (msec): 00:19:21.287 | 1.00th=[ 161], 5.00th=[ 584], 10.00th=[ 1036], 20.00th=[ 1133], 00:19:21.287 | 30.00th=[ 1401], 40.00th=[ 1452], 50.00th=[ 1552], 60.00th=[ 1636], 00:19:21.287 | 70.00th=[ 2022], 80.00th=[ 2232], 90.00th=[ 2467], 95.00th=[ 3171], 00:19:21.287 | 99.00th=[ 3742], 99.50th=[ 5604], 99.90th=[ 5604], 99.95th=[ 5604], 00:19:21.287 | 99.99th=[ 5604] 00:19:21.287 bw ( KiB/s): min=28672, max=110592, per=2.42%, 
avg=73426.93, stdev=24710.29, samples=14 00:19:21.287 iops : min= 28, max= 108, avg=71.64, stdev=24.17, samples=14 00:19:21.287 lat (msec) : 50=0.16%, 100=0.32%, 250=1.27%, 500=2.07%, 750=2.86% 00:19:21.287 lat (msec) : 1000=1.43%, 2000=61.05%, >=2000=30.84% 00:19:21.287 cpu : usr=0.02%, sys=1.19%, ctx=2155, majf=0, minf=32769 00:19:21.287 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.5%, 32=5.1%, >=64=90.0% 00:19:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.287 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.287 issued rwts: total=629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.287 job3: (groupid=0, jobs=1): err= 0: pid=1493161: Mon Jul 15 23:45:08 2024 00:19:21.287 read: IOPS=23, BW=23.4MiB/s (24.5MB/s)(235MiB/10049msec) 00:19:21.287 slat (usec): min=1092, max=2085.1k, avg=42565.01, stdev=230606.85 00:19:21.287 clat (msec): min=44, max=8613, avg=2043.27, stdev=2047.95 00:19:21.287 lat (msec): min=57, max=8647, avg=2085.83, stdev=2096.12 00:19:21.287 clat percentiles (msec): 00:19:21.287 | 1.00th=[ 66], 5.00th=[ 153], 10.00th=[ 296], 20.00th=[ 518], 00:19:21.287 | 30.00th=[ 885], 40.00th=[ 1351], 50.00th=[ 1905], 60.00th=[ 2198], 00:19:21.287 | 70.00th=[ 2232], 80.00th=[ 2299], 90.00th=[ 2467], 95.00th=[ 8423], 00:19:21.287 | 99.00th=[ 8658], 99.50th=[ 8658], 99.90th=[ 8658], 99.95th=[ 8658], 00:19:21.287 | 99.99th=[ 8658] 00:19:21.287 bw ( KiB/s): min=14336, max=100352, per=1.82%, avg=55296.00, stdev=36597.56, samples=4 00:19:21.287 iops : min= 14, max= 98, avg=54.00, stdev=35.74, samples=4 00:19:21.287 lat (msec) : 50=0.43%, 100=2.13%, 250=5.96%, 500=10.21%, 750=6.38% 00:19:21.287 lat (msec) : 1000=7.66%, 2000=18.72%, >=2000=48.51% 00:19:21.287 cpu : usr=0.00%, sys=0.74%, ctx=949, majf=0, minf=32769 00:19:21.287 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.2% 00:19:21.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.287 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:19:21.287 issued rwts: total=235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.287 job3: (groupid=0, jobs=1): err= 0: pid=1493162: Mon Jul 15 23:45:08 2024 00:19:21.287 read: IOPS=67, BW=67.9MiB/s (71.2MB/s)(681MiB/10027msec) 00:19:21.287 slat (usec): min=43, max=2045.0k, avg=14683.12, stdev=109254.84 00:19:21.287 clat (msec): min=25, max=3721, avg=1355.61, stdev=1044.90 00:19:21.287 lat (msec): min=32, max=3724, avg=1370.30, stdev=1050.11 00:19:21.287 clat percentiles (msec): 00:19:21.287 | 1.00th=[ 85], 5.00th=[ 380], 10.00th=[ 443], 20.00th=[ 502], 00:19:21.287 | 30.00th=[ 609], 40.00th=[ 693], 50.00th=[ 1150], 60.00th=[ 1318], 00:19:21.287 | 70.00th=[ 1452], 80.00th=[ 1586], 90.00th=[ 3440], 95.00th=[ 3675], 00:19:21.287 | 99.00th=[ 3708], 99.50th=[ 3708], 99.90th=[ 3708], 99.95th=[ 3708], 00:19:21.287 | 99.99th=[ 3708] 00:19:21.287 bw ( KiB/s): min=24576, max=280576, per=3.40%, avg=103144.73, stdev=64836.41, samples=11 00:19:21.287 iops : min= 24, max= 274, avg=100.73, stdev=63.32, samples=11 00:19:21.287 lat (msec) : 50=0.59%, 100=0.59%, 250=1.91%, 500=16.89%, 750=21.73% 00:19:21.287 lat (msec) : 1000=4.11%, 2000=34.95%, >=2000=19.24% 00:19:21.287 cpu : usr=0.00%, sys=1.03%, ctx=1680, majf=0, minf=32769 00:19:21.287 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.7% 00:19:21.287 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.288 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.288 issued rwts: total=681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.288 job3: (groupid=0, jobs=1): err= 0: pid=1493163: Mon Jul 15 23:45:08 2024 00:19:21.288 read: IOPS=110, BW=111MiB/s (116MB/s)(1113MiB/10045msec) 00:19:21.288 slat (usec): min=38, max=1943.4k, avg=8982.38, stdev=59133.83 00:19:21.288 clat (msec): min=42, max=2982, avg=882.25, stdev=368.77 00:19:21.288 lat (msec): min=48, max=3005, avg=891.23, stdev=375.23 00:19:21.288 clat percentiles (msec): 00:19:21.288 | 1.00th=[ 99], 5.00th=[ 510], 10.00th=[ 542], 20.00th=[ 642], 00:19:21.288 | 30.00th=[ 684], 40.00th=[ 760], 50.00th=[ 835], 60.00th=[ 877], 00:19:21.288 | 70.00th=[ 936], 80.00th=[ 1167], 90.00th=[ 1435], 95.00th=[ 1469], 00:19:21.288 | 99.00th=[ 1519], 99.50th=[ 2937], 99.90th=[ 2970], 99.95th=[ 2970], 00:19:21.288 | 99.99th=[ 2970] 00:19:21.288 bw ( KiB/s): min=10240, max=256000, per=4.43%, avg=134535.07, stdev=59579.01, samples=15 00:19:21.288 iops : min= 10, max= 250, avg=131.27, stdev=58.17, samples=15 00:19:21.288 lat (msec) : 50=0.18%, 100=0.90%, 250=1.53%, 500=2.34%, 750=34.14% 00:19:21.288 lat (msec) : 1000=37.11%, 2000=22.82%, >=2000=0.99% 00:19:21.288 cpu : usr=0.03%, sys=1.35%, ctx=1658, majf=0, minf=32769 00:19:21.288 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.9%, >=64=94.3% 00:19:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.288 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.288 issued rwts: total=1113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.288 job3: (groupid=0, jobs=1): err= 0: pid=1493164: Mon Jul 15 23:45:08 2024 00:19:21.288 read: IOPS=40, BW=41.0MiB/s (43.0MB/s)(414MiB/10100msec) 00:19:21.288 slat (usec): min=50, max=1941.5k, avg=24180.90, stdev=125020.95 00:19:21.288 clat (msec): min=86, max=6869, avg=2516.21, stdev=2208.19 00:19:21.288 lat (msec): min=103, max=6879, avg=2540.39, stdev=2221.41 00:19:21.288 clat percentiles (msec): 00:19:21.288 | 1.00th=[ 157], 5.00th=[ 376], 10.00th=[ 550], 20.00th=[ 793], 00:19:21.288 | 30.00th=[ 911], 40.00th=[ 995], 50.00th=[ 1267], 60.00th=[ 1905], 00:19:21.288 | 70.00th=[ 4329], 80.00th=[ 5201], 90.00th=[ 6544], 95.00th=[ 6812], 00:19:21.288 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6879], 99.95th=[ 6879], 00:19:21.288 | 99.99th=[ 6879] 00:19:21.288 bw ( KiB/s): min= 6144, max=129024, per=1.93%, avg=58563.30, stdev=48314.87, samples=10 00:19:21.288 iops : min= 6, max= 126, avg=57.10, stdev=47.21, samples=10 00:19:21.288 lat (msec) : 100=0.24%, 250=2.90%, 500=6.76%, 750=7.49%, 1000=22.95% 00:19:21.288 lat (msec) : 2000=21.26%, >=2000=38.41% 00:19:21.288 cpu : usr=0.00%, sys=1.07%, ctx=1136, majf=0, minf=32769 00:19:21.288 IO depths : 1=0.2%, 2=0.5%, 4=1.0%, 8=1.9%, 16=3.9%, 32=7.7%, >=64=84.8% 00:19:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.288 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.288 issued rwts: total=414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.288 job3: (groupid=0, jobs=1): err= 0: pid=1493165: Mon Jul 15 23:45:08 2024 00:19:21.288 read: IOPS=79, BW=79.3MiB/s (83.1MB/s)(796MiB/10044msec) 
00:19:21.288 slat (usec): min=60, max=1202.9k, avg=12573.43, stdev=44127.56 00:19:21.288 clat (msec): min=32, max=5875, avg=1511.47, stdev=879.26 00:19:21.288 lat (msec): min=53, max=5886, avg=1524.05, stdev=885.81 00:19:21.288 clat percentiles (msec): 00:19:21.288 | 1.00th=[ 90], 5.00th=[ 405], 10.00th=[ 726], 20.00th=[ 743], 00:19:21.288 | 30.00th=[ 885], 40.00th=[ 1133], 50.00th=[ 1418], 60.00th=[ 1603], 00:19:21.288 | 70.00th=[ 1854], 80.00th=[ 2005], 90.00th=[ 2836], 95.00th=[ 2903], 00:19:21.288 | 99.00th=[ 4866], 99.50th=[ 5873], 99.90th=[ 5873], 99.95th=[ 5873], 00:19:21.288 | 99.99th=[ 5873] 00:19:21.288 bw ( KiB/s): min=14307, max=161792, per=2.65%, avg=80472.65, stdev=36844.24, samples=17 00:19:21.288 iops : min= 13, max= 158, avg=78.53, stdev=36.09, samples=17 00:19:21.288 lat (msec) : 50=0.13%, 100=1.13%, 250=1.88%, 500=3.02%, 750=17.96% 00:19:21.288 lat (msec) : 1000=9.42%, 2000=46.11%, >=2000=20.35% 00:19:21.288 cpu : usr=0.02%, sys=1.38%, ctx=1927, majf=0, minf=32769 00:19:21.288 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.0%, 32=4.0%, >=64=92.1% 00:19:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.288 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.288 issued rwts: total=796,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.288 job4: (groupid=0, jobs=1): err= 0: pid=1493166: Mon Jul 15 23:45:08 2024 00:19:21.288 read: IOPS=99, BW=99.5MiB/s (104MB/s)(1276MiB/12820msec) 00:19:21.288 slat (usec): min=35, max=2004.5k, avg=8367.26, stdev=56635.20 00:19:21.288 clat (msec): min=384, max=5011, avg=1202.31, stdev=1146.91 00:19:21.288 lat (msec): min=386, max=5016, avg=1210.68, stdev=1149.93 00:19:21.288 clat percentiles (msec): 00:19:21.288 | 1.00th=[ 388], 5.00th=[ 393], 10.00th=[ 422], 20.00th=[ 609], 00:19:21.288 | 30.00th=[ 735], 40.00th=[ 802], 50.00th=[ 885], 60.00th=[ 995], 00:19:21.288 | 70.00th=[ 1070], 80.00th=[ 1116], 90.00th=[ 2140], 95.00th=[ 4530], 00:19:21.288 | 99.00th=[ 4933], 99.50th=[ 5000], 99.90th=[ 5000], 99.95th=[ 5000], 00:19:21.288 | 99.99th=[ 5000] 00:19:21.288 bw ( KiB/s): min= 1381, max=321536, per=4.56%, avg=138368.24, stdev=74460.08, samples=17 00:19:21.288 iops : min= 1, max= 314, avg=135.06, stdev=72.75, samples=17 00:19:21.288 lat (msec) : 500=11.91%, 750=20.06%, 1000=28.92%, 2000=29.08%, >=2000=10.03% 00:19:21.288 cpu : usr=0.01%, sys=1.05%, ctx=3259, majf=0, minf=32769 00:19:21.288 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:19:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.288 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.288 issued rwts: total=1276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.288 job4: (groupid=0, jobs=1): err= 0: pid=1493167: Mon Jul 15 23:45:08 2024 00:19:21.288 read: IOPS=23, BW=24.0MiB/s (25.1MB/s)(307MiB/12804msec) 00:19:21.288 slat (usec): min=444, max=2056.2k, avg=34726.26, stdev=218596.25 00:19:21.288 clat (msec): min=943, max=7450, avg=2881.79, stdev=1749.10 00:19:21.288 lat (msec): min=947, max=8278, avg=2916.52, stdev=1783.09 00:19:21.288 clat percentiles (msec): 00:19:21.288 | 1.00th=[ 953], 5.00th=[ 1020], 10.00th=[ 1053], 20.00th=[ 1234], 00:19:21.288 | 30.00th=[ 1418], 40.00th=[ 1485], 50.00th=[ 1569], 60.00th=[ 3910], 00:19:21.288 | 70.00th=[ 4329], 80.00th=[ 4530], 90.00th=[ 4665], 
95.00th=[ 5336], 00:19:21.288 | 99.00th=[ 7349], 99.50th=[ 7416], 99.90th=[ 7483], 99.95th=[ 7483], 00:19:21.288 | 99.99th=[ 7483] 00:19:21.288 bw ( KiB/s): min= 1410, max=184320, per=2.02%, avg=61333.67, stdev=67999.46, samples=6 00:19:21.288 iops : min= 1, max= 180, avg=59.83, stdev=66.47, samples=6 00:19:21.288 lat (msec) : 1000=3.91%, 2000=46.58%, >=2000=49.51% 00:19:21.288 cpu : usr=0.00%, sys=0.57%, ctx=1171, majf=0, minf=32769 00:19:21.288 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.6%, 16=5.2%, 32=10.4%, >=64=79.5% 00:19:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.288 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:21.288 issued rwts: total=307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.288 job4: (groupid=0, jobs=1): err= 0: pid=1493168: Mon Jul 15 23:45:08 2024 00:19:21.288 read: IOPS=71, BW=71.3MiB/s (74.8MB/s)(915MiB/12827msec) 00:19:21.288 slat (usec): min=56, max=2143.1k, avg=11675.78, stdev=71938.59 00:19:21.288 clat (msec): min=435, max=4838, avg=1677.79, stdev=1214.20 00:19:21.288 lat (msec): min=441, max=4841, avg=1689.47, stdev=1215.46 00:19:21.288 clat percentiles (msec): 00:19:21.288 | 1.00th=[ 489], 5.00th=[ 659], 10.00th=[ 793], 20.00th=[ 953], 00:19:21.288 | 30.00th=[ 1053], 40.00th=[ 1167], 50.00th=[ 1234], 60.00th=[ 1385], 00:19:21.288 | 70.00th=[ 1502], 80.00th=[ 1921], 90.00th=[ 4530], 95.00th=[ 4665], 00:19:21.288 | 99.00th=[ 4799], 99.50th=[ 4799], 99.90th=[ 4866], 99.95th=[ 4866], 00:19:21.288 | 99.99th=[ 4866] 00:19:21.288 bw ( KiB/s): min= 2048, max=231424, per=3.13%, avg=94926.94, stdev=56904.64, samples=17 00:19:21.288 iops : min= 2, max= 226, avg=92.65, stdev=55.58, samples=17 00:19:21.288 lat (msec) : 500=1.09%, 750=6.01%, 1000=18.03%, 2000=58.36%, >=2000=16.50% 00:19:21.288 cpu : usr=0.02%, sys=1.19%, ctx=2412, majf=0, minf=32769 00:19:21.288 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.7%, 32=3.5%, >=64=93.1% 00:19:21.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.288 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.288 issued rwts: total=915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.288 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.288 job4: (groupid=0, jobs=1): err= 0: pid=1493169: Mon Jul 15 23:45:08 2024 00:19:21.288 read: IOPS=4, BW=4242KiB/s (4344kB/s)(53.0MiB/12794msec) 00:19:21.288 slat (usec): min=535, max=2075.7k, avg=200937.90, stdev=600283.59 00:19:21.288 clat (msec): min=2143, max=12721, avg=8652.69, stdev=2881.73 00:19:21.288 lat (msec): min=4173, max=12793, avg=8853.63, stdev=2788.95 00:19:21.288 clat percentiles (msec): 00:19:21.288 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:21.288 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[ 8557], 60.00th=[ 8557], 00:19:21.288 | 70.00th=[10671], 80.00th=[10671], 90.00th=[12684], 95.00th=[12684], 00:19:21.288 | 99.00th=[12684], 99.50th=[12684], 99.90th=[12684], 99.95th=[12684], 00:19:21.288 | 99.99th=[12684] 00:19:21.288 lat (msec) : >=2000=100.00% 00:19:21.288 cpu : usr=0.00%, sys=0.27%, ctx=71, majf=0, minf=13569 00:19:21.289 IO depths : 1=1.9%, 2=3.8%, 4=7.5%, 8=15.1%, 16=30.2%, 32=41.5%, >=64=0.0% 00:19:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.289 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.289 issued rwts: total=53,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:19:21.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.289 job4: (groupid=0, jobs=1): err= 0: pid=1493170: Mon Jul 15 23:45:08 2024 00:19:21.289 read: IOPS=37, BW=37.2MiB/s (39.0MB/s)(476MiB/12782msec) 00:19:21.289 slat (usec): min=44, max=2088.2k, avg=22347.36, stdev=167672.13 00:19:21.289 clat (msec): min=645, max=6120, avg=2145.25, stdev=2155.05 00:19:21.289 lat (msec): min=646, max=6135, avg=2167.59, stdev=2163.99 00:19:21.289 clat percentiles (msec): 00:19:21.289 | 1.00th=[ 651], 5.00th=[ 651], 10.00th=[ 684], 20.00th=[ 751], 00:19:21.289 | 30.00th=[ 810], 40.00th=[ 835], 50.00th=[ 869], 60.00th=[ 919], 00:19:21.289 | 70.00th=[ 1045], 80.00th=[ 5403], 90.00th=[ 5873], 95.00th=[ 5940], 00:19:21.289 | 99.00th=[ 6074], 99.50th=[ 6141], 99.90th=[ 6141], 99.95th=[ 6141], 00:19:21.289 | 99.99th=[ 6141] 00:19:21.289 bw ( KiB/s): min= 1450, max=210944, per=2.94%, avg=89269.25, stdev=84216.82, samples=8 00:19:21.289 iops : min= 1, max= 206, avg=87.12, stdev=82.31, samples=8 00:19:21.289 lat (msec) : 750=19.96%, 1000=48.95%, 2000=3.15%, >=2000=27.94% 00:19:21.289 cpu : usr=0.01%, sys=0.67%, ctx=1091, majf=0, minf=32769 00:19:21.289 IO depths : 1=0.2%, 2=0.4%, 4=0.8%, 8=1.7%, 16=3.4%, 32=6.7%, >=64=86.8% 00:19:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.289 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.289 issued rwts: total=476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.289 job4: (groupid=0, jobs=1): err= 0: pid=1493171: Mon Jul 15 23:45:08 2024 00:19:21.289 read: IOPS=46, BW=46.3MiB/s (48.5MB/s)(594MiB/12836msec) 00:19:21.289 slat (usec): min=155, max=2145.2k, avg=17999.72, stdev=142013.00 00:19:21.289 clat (msec): min=774, max=6845, avg=2659.89, stdev=2110.56 00:19:21.289 lat (msec): min=776, max=6851, avg=2677.89, stdev=2113.90 00:19:21.289 clat percentiles (msec): 00:19:21.289 | 1.00th=[ 776], 5.00th=[ 827], 10.00th=[ 894], 20.00th=[ 1011], 00:19:21.289 | 30.00th=[ 1070], 40.00th=[ 1150], 50.00th=[ 1183], 60.00th=[ 3306], 00:19:21.289 | 70.00th=[ 3339], 80.00th=[ 6007], 90.00th=[ 6275], 95.00th=[ 6611], 00:19:21.289 | 99.00th=[ 6812], 99.50th=[ 6812], 99.90th=[ 6812], 99.95th=[ 6812], 00:19:21.289 | 99.99th=[ 6812] 00:19:21.289 bw ( KiB/s): min= 2048, max=151552, per=2.62%, avg=79663.75, stdev=51956.15, samples=12 00:19:21.289 iops : min= 2, max= 148, avg=77.67, stdev=50.65, samples=12 00:19:21.289 lat (msec) : 1000=17.17%, 2000=39.73%, >=2000=43.10% 00:19:21.289 cpu : usr=0.02%, sys=0.80%, ctx=1625, majf=0, minf=32769 00:19:21.289 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.3%, 16=2.7%, 32=5.4%, >=64=89.4% 00:19:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.289 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.289 issued rwts: total=594,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.289 job4: (groupid=0, jobs=1): err= 0: pid=1493172: Mon Jul 15 23:45:08 2024 00:19:21.289 read: IOPS=44, BW=44.3MiB/s (46.5MB/s)(568MiB/12808msec) 00:19:21.289 slat (usec): min=41, max=2105.2k, avg=18779.46, stdev=150490.33 00:19:21.289 clat (msec): min=450, max=5341, avg=1783.97, stdev=1629.58 00:19:21.289 lat (msec): min=450, max=5443, avg=1802.75, stdev=1639.21 00:19:21.289 clat percentiles (msec): 00:19:21.289 | 1.00th=[ 464], 5.00th=[ 514], 10.00th=[ 523], 20.00th=[ 575], 
00:19:21.289 | 30.00th=[ 634], 40.00th=[ 953], 50.00th=[ 1045], 60.00th=[ 1301], 00:19:21.289 | 70.00th=[ 1469], 80.00th=[ 4329], 90.00th=[ 4799], 95.00th=[ 4933], 00:19:21.289 | 99.00th=[ 5000], 99.50th=[ 5000], 99.90th=[ 5336], 99.95th=[ 5336], 00:19:21.289 | 99.99th=[ 5336] 00:19:21.289 bw ( KiB/s): min= 1410, max=249856, per=3.30%, avg=100281.11, stdev=93307.68, samples=9 00:19:21.289 iops : min= 1, max= 244, avg=97.89, stdev=91.17, samples=9 00:19:21.289 lat (msec) : 500=2.82%, 750=35.04%, 1000=9.15%, 2000=29.40%, >=2000=23.59% 00:19:21.289 cpu : usr=0.02%, sys=0.63%, ctx=1981, majf=0, minf=32769 00:19:21.289 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.8%, 32=5.6%, >=64=88.9% 00:19:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.289 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.289 issued rwts: total=568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.289 job4: (groupid=0, jobs=1): err= 0: pid=1493173: Mon Jul 15 23:45:08 2024 00:19:21.289 read: IOPS=93, BW=93.7MiB/s (98.2MB/s)(1005MiB/10728msec) 00:19:21.289 slat (usec): min=45, max=2150.9k, avg=10605.25, stdev=94175.92 00:19:21.289 clat (msec): min=66, max=3685, avg=1308.03, stdev=1063.70 00:19:21.289 lat (msec): min=370, max=3724, avg=1318.64, stdev=1066.45 00:19:21.289 clat percentiles (msec): 00:19:21.289 | 1.00th=[ 372], 5.00th=[ 372], 10.00th=[ 376], 20.00th=[ 426], 00:19:21.289 | 30.00th=[ 527], 40.00th=[ 667], 50.00th=[ 827], 60.00th=[ 1083], 00:19:21.289 | 70.00th=[ 1318], 80.00th=[ 2635], 90.00th=[ 3171], 95.00th=[ 3440], 00:19:21.289 | 99.00th=[ 3641], 99.50th=[ 3675], 99.90th=[ 3675], 99.95th=[ 3675], 00:19:21.289 | 99.99th=[ 3675] 00:19:21.289 bw ( KiB/s): min= 4096, max=314762, per=4.55%, avg=138112.77, stdev=90526.23, samples=13 00:19:21.289 iops : min= 4, max= 307, avg=134.85, stdev=88.34, samples=13 00:19:21.289 lat (msec) : 100=0.10%, 500=26.27%, 750=20.40%, 1000=9.55%, 2000=18.41% 00:19:21.289 lat (msec) : >=2000=25.27% 00:19:21.289 cpu : usr=0.07%, sys=1.05%, ctx=2512, majf=0, minf=32769 00:19:21.289 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.2%, >=64=93.7% 00:19:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.289 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.289 issued rwts: total=1005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.289 job4: (groupid=0, jobs=1): err= 0: pid=1493174: Mon Jul 15 23:45:08 2024 00:19:21.289 read: IOPS=55, BW=55.3MiB/s (58.0MB/s)(555MiB/10037msec) 00:19:21.289 slat (usec): min=46, max=2075.3k, avg=18020.53, stdev=123907.62 00:19:21.289 clat (msec): min=33, max=7096, avg=997.20, stdev=991.06 00:19:21.289 lat (msec): min=38, max=7187, avg=1015.22, stdev=1029.68 00:19:21.289 clat percentiles (msec): 00:19:21.289 | 1.00th=[ 45], 5.00th=[ 115], 10.00th=[ 203], 20.00th=[ 359], 00:19:21.289 | 30.00th=[ 372], 40.00th=[ 372], 50.00th=[ 380], 60.00th=[ 542], 00:19:21.289 | 70.00th=[ 1385], 80.00th=[ 2140], 90.00th=[ 2433], 95.00th=[ 2903], 00:19:21.289 | 99.00th=[ 3037], 99.50th=[ 5134], 99.90th=[ 7080], 99.95th=[ 7080], 00:19:21.289 | 99.99th=[ 7080] 00:19:21.289 bw ( KiB/s): min=34816, max=346112, per=4.81%, avg=146090.67, stdev=144871.45, samples=6 00:19:21.289 iops : min= 34, max= 338, avg=142.67, stdev=141.48, samples=6 00:19:21.289 lat (msec) : 50=1.08%, 100=3.06%, 250=9.01%, 
500=46.31%, 750=2.34% 00:19:21.289 lat (msec) : 1000=2.70%, 2000=13.69%, >=2000=21.80% 00:19:21.289 cpu : usr=0.02%, sys=1.06%, ctx=1664, majf=0, minf=32769 00:19:21.289 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.4%, 16=2.9%, 32=5.8%, >=64=88.6% 00:19:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.289 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.289 issued rwts: total=555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.289 job4: (groupid=0, jobs=1): err= 0: pid=1493175: Mon Jul 15 23:45:08 2024 00:19:21.289 read: IOPS=100, BW=101MiB/s (105MB/s)(1292MiB/12854msec) 00:19:21.289 slat (usec): min=39, max=2124.9k, avg=8284.74, stdev=61852.27 00:19:21.289 clat (msec): min=380, max=4920, avg=1200.71, stdev=1195.90 00:19:21.289 lat (msec): min=381, max=4924, avg=1208.99, stdev=1199.13 00:19:21.289 clat percentiles (msec): 00:19:21.289 | 1.00th=[ 384], 5.00th=[ 401], 10.00th=[ 405], 20.00th=[ 435], 00:19:21.289 | 30.00th=[ 506], 40.00th=[ 651], 50.00th=[ 768], 60.00th=[ 869], 00:19:21.289 | 70.00th=[ 1070], 80.00th=[ 1452], 90.00th=[ 2433], 95.00th=[ 4530], 00:19:21.289 | 99.00th=[ 4799], 99.50th=[ 4866], 99.90th=[ 4933], 99.95th=[ 4933], 00:19:21.289 | 99.99th=[ 4933] 00:19:21.289 bw ( KiB/s): min= 2048, max=309248, per=4.62%, avg=140348.24, stdev=103972.86, samples=17 00:19:21.289 iops : min= 2, max= 302, avg=137.06, stdev=101.54, samples=17 00:19:21.289 lat (msec) : 500=29.18%, 750=20.12%, 1000=17.26%, 2000=19.12%, >=2000=14.32% 00:19:21.289 cpu : usr=0.05%, sys=1.26%, ctx=2680, majf=0, minf=32769 00:19:21.289 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:19:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.289 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.289 issued rwts: total=1292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.289 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.289 job4: (groupid=0, jobs=1): err= 0: pid=1493176: Mon Jul 15 23:45:08 2024 00:19:21.289 read: IOPS=6, BW=6627KiB/s (6786kB/s)(83.0MiB/12826msec) 00:19:21.289 slat (usec): min=489, max=2126.0k, avg=128686.88, stdev=482341.41 00:19:21.289 clat (msec): min=2144, max=12824, avg=10108.66, stdev=2716.47 00:19:21.289 lat (msec): min=4190, max=12825, avg=10237.35, stdev=2584.35 00:19:21.289 clat percentiles (msec): 00:19:21.289 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 8356], 20.00th=[ 8423], 00:19:21.289 | 30.00th=[ 8423], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12550], 00:19:21.289 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12818], 00:19:21.289 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:21.289 | 99.99th=[12818] 00:19:21.289 lat (msec) : >=2000=100.00% 00:19:21.289 cpu : usr=0.01%, sys=0.46%, ctx=95, majf=0, minf=21249 00:19:21.289 IO depths : 1=1.2%, 2=2.4%, 4=4.8%, 8=9.6%, 16=19.3%, 32=38.6%, >=64=24.1% 00:19:21.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.289 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:21.290 issued rwts: total=83,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.290 job4: (groupid=0, jobs=1): err= 0: pid=1493177: Mon Jul 15 23:45:08 2024 00:19:21.290 read: IOPS=77, BW=77.8MiB/s (81.6MB/s)(833MiB/10704msec) 00:19:21.290 slat (usec): min=40, 
max=1438.1k, avg=12770.33, stdev=74037.23 00:19:21.290 clat (msec): min=63, max=4249, avg=1361.78, stdev=517.50 00:19:21.290 lat (msec): min=730, max=4284, avg=1374.55, stdev=524.10 00:19:21.290 clat percentiles (msec): 00:19:21.290 | 1.00th=[ 743], 5.00th=[ 768], 10.00th=[ 785], 20.00th=[ 810], 00:19:21.290 | 30.00th=[ 902], 40.00th=[ 1083], 50.00th=[ 1435], 60.00th=[ 1603], 00:19:21.290 | 70.00th=[ 1636], 80.00th=[ 1720], 90.00th=[ 2198], 95.00th=[ 2333], 00:19:21.290 | 99.00th=[ 2567], 99.50th=[ 2635], 99.90th=[ 4245], 99.95th=[ 4245], 00:19:21.290 | 99.99th=[ 4245] 00:19:21.290 bw ( KiB/s): min=36864, max=163840, per=3.66%, avg=111089.54, stdev=44408.11, samples=13 00:19:21.290 iops : min= 36, max= 160, avg=108.46, stdev=43.34, samples=13 00:19:21.290 lat (msec) : 100=0.12%, 750=1.32%, 1000=32.89%, 2000=54.62%, >=2000=11.04% 00:19:21.290 cpu : usr=0.02%, sys=1.07%, ctx=2167, majf=0, minf=32769 00:19:21.290 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.8%, >=64=92.4% 00:19:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.290 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.290 issued rwts: total=833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.290 job4: (groupid=0, jobs=1): err= 0: pid=1493178: Mon Jul 15 23:45:08 2024 00:19:21.290 read: IOPS=64, BW=64.3MiB/s (67.4MB/s)(689MiB/10719msec) 00:19:21.290 slat (usec): min=30, max=2059.8k, avg=15448.68, stdev=130001.90 00:19:21.290 clat (msec): min=72, max=5643, avg=1089.43, stdev=781.79 00:19:21.290 lat (msec): min=390, max=5731, avg=1104.88, stdev=801.67 00:19:21.290 clat percentiles (msec): 00:19:21.290 | 1.00th=[ 388], 5.00th=[ 447], 10.00th=[ 468], 20.00th=[ 485], 00:19:21.290 | 30.00th=[ 514], 40.00th=[ 542], 50.00th=[ 667], 60.00th=[ 1133], 00:19:21.290 | 70.00th=[ 1385], 80.00th=[ 1536], 90.00th=[ 2366], 95.00th=[ 2500], 00:19:21.290 | 99.00th=[ 2635], 99.50th=[ 3608], 99.90th=[ 5671], 99.95th=[ 5671], 00:19:21.290 | 99.99th=[ 5671] 00:19:21.290 bw ( KiB/s): min=55296, max=272384, per=5.40%, avg=164088.14, stdev=95300.37, samples=7 00:19:21.290 iops : min= 54, max= 266, avg=160.14, stdev=93.08, samples=7 00:19:21.290 lat (msec) : 100=0.15%, 500=25.25%, 750=27.29%, 1000=5.37%, 2000=24.24% 00:19:21.290 lat (msec) : >=2000=17.71% 00:19:21.290 cpu : usr=0.00%, sys=0.91%, ctx=2187, majf=0, minf=32769 00:19:21.290 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:19:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.290 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.290 issued rwts: total=689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.290 job5: (groupid=0, jobs=1): err= 0: pid=1493179: Mon Jul 15 23:45:08 2024 00:19:21.290 read: IOPS=164, BW=165MiB/s (173MB/s)(1784MiB/10821msec) 00:19:21.290 slat (usec): min=39, max=2016.2k, avg=6015.79, stdev=64568.46 00:19:21.290 clat (msec): min=80, max=4536, avg=743.70, stdev=980.37 00:19:21.290 lat (msec): min=256, max=4538, avg=749.72, stdev=983.82 00:19:21.290 clat percentiles (msec): 00:19:21.290 | 1.00th=[ 257], 5.00th=[ 259], 10.00th=[ 259], 20.00th=[ 262], 00:19:21.290 | 30.00th=[ 264], 40.00th=[ 266], 50.00th=[ 443], 60.00th=[ 584], 00:19:21.290 | 70.00th=[ 667], 80.00th=[ 827], 90.00th=[ 919], 95.00th=[ 4329], 00:19:21.290 | 99.00th=[ 4530], 99.50th=[ 4530], 99.90th=[ 
4530], 99.95th=[ 4530], 00:19:21.290 | 99.99th=[ 4530] 00:19:21.290 bw ( KiB/s): min=20439, max=499712, per=7.98%, avg=242202.36, stdev=173008.72, samples=14 00:19:21.290 iops : min= 19, max= 488, avg=236.36, stdev=169.10, samples=14 00:19:21.290 lat (msec) : 100=0.06%, 500=51.51%, 750=21.36%, 1000=18.50%, 2000=0.73% 00:19:21.290 lat (msec) : >=2000=7.85% 00:19:21.290 cpu : usr=0.09%, sys=1.97%, ctx=2165, majf=0, minf=32769 00:19:21.290 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.8%, >=64=96.5% 00:19:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.290 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.290 issued rwts: total=1784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.290 job5: (groupid=0, jobs=1): err= 0: pid=1493180: Mon Jul 15 23:45:08 2024 00:19:21.290 read: IOPS=10, BW=10.1MiB/s (10.5MB/s)(108MiB/10739msec) 00:19:21.290 slat (usec): min=365, max=2052.2k, avg=98721.34, stdev=393252.46 00:19:21.290 clat (msec): min=76, max=10718, avg=8751.16, stdev=2432.14 00:19:21.290 lat (msec): min=2117, max=10738, avg=8849.88, stdev=2288.90 00:19:21.290 clat percentiles (msec): 00:19:21.290 | 1.00th=[ 2123], 5.00th=[ 2198], 10.00th=[ 4329], 20.00th=[ 6342], 00:19:21.290 | 30.00th=[ 8557], 40.00th=[ 9866], 50.00th=[ 9866], 60.00th=[10000], 00:19:21.290 | 70.00th=[10134], 80.00th=[10268], 90.00th=[10537], 95.00th=[10537], 00:19:21.290 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:21.290 | 99.99th=[10671] 00:19:21.290 lat (msec) : 100=0.93%, >=2000=99.07% 00:19:21.290 cpu : usr=0.01%, sys=0.51%, ctx=272, majf=0, minf=27649 00:19:21.290 IO depths : 1=0.9%, 2=1.9%, 4=3.7%, 8=7.4%, 16=14.8%, 32=29.6%, >=64=41.7% 00:19:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.290 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:21.290 issued rwts: total=108,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.290 job5: (groupid=0, jobs=1): err= 0: pid=1493181: Mon Jul 15 23:45:08 2024 00:19:21.290 read: IOPS=71, BW=72.0MiB/s (75.4MB/s)(777MiB/10799msec) 00:19:21.290 slat (usec): min=35, max=1933.5k, avg=13818.29, stdev=108371.26 00:19:21.290 clat (msec): min=58, max=4888, avg=1496.26, stdev=1318.18 00:19:21.290 lat (msec): min=397, max=4892, avg=1510.08, stdev=1321.69 00:19:21.290 clat percentiles (msec): 00:19:21.290 | 1.00th=[ 414], 5.00th=[ 477], 10.00th=[ 535], 20.00th=[ 667], 00:19:21.290 | 30.00th=[ 735], 40.00th=[ 844], 50.00th=[ 869], 60.00th=[ 877], 00:19:21.290 | 70.00th=[ 1234], 80.00th=[ 2903], 90.00th=[ 3641], 95.00th=[ 4799], 00:19:21.290 | 99.00th=[ 4866], 99.50th=[ 4866], 99.90th=[ 4866], 99.95th=[ 4866], 00:19:21.290 | 99.99th=[ 4866] 00:19:21.290 bw ( KiB/s): min=12288, max=288768, per=3.98%, avg=120807.00, stdev=92096.56, samples=11 00:19:21.290 iops : min= 12, max= 282, avg=117.82, stdev=90.04, samples=11 00:19:21.290 lat (msec) : 100=0.13%, 500=7.72%, 750=24.07%, 1000=36.04%, 2000=9.01% 00:19:21.290 lat (msec) : >=2000=23.04% 00:19:21.290 cpu : usr=0.03%, sys=1.28%, ctx=842, majf=0, minf=32769 00:19:21.290 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.1%, >=64=91.9% 00:19:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.290 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.290 issued rwts: 
total=777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.290 job5: (groupid=0, jobs=1): err= 0: pid=1493182: Mon Jul 15 23:45:08 2024 00:19:21.290 read: IOPS=111, BW=111MiB/s (117MB/s)(1123MiB/10083msec) 00:19:21.290 slat (usec): min=39, max=2107.3k, avg=8898.04, stdev=63784.22 00:19:21.290 clat (msec): min=82, max=7035, avg=1053.98, stdev=739.17 00:19:21.290 lat (msec): min=83, max=7066, avg=1062.88, stdev=742.09 00:19:21.290 clat percentiles (msec): 00:19:21.290 | 1.00th=[ 161], 5.00th=[ 430], 10.00th=[ 659], 20.00th=[ 684], 00:19:21.290 | 30.00th=[ 751], 40.00th=[ 802], 50.00th=[ 835], 60.00th=[ 877], 00:19:21.290 | 70.00th=[ 953], 80.00th=[ 1062], 90.00th=[ 2937], 95.00th=[ 2970], 00:19:21.290 | 99.00th=[ 3037], 99.50th=[ 3071], 99.90th=[ 3104], 99.95th=[ 7013], 00:19:21.290 | 99.99th=[ 7013] 00:19:21.290 bw ( KiB/s): min=36864, max=196608, per=4.48%, avg=135971.33, stdev=41119.01, samples=15 00:19:21.290 iops : min= 36, max= 192, avg=132.73, stdev=40.18, samples=15 00:19:21.290 lat (msec) : 100=0.89%, 250=1.42%, 500=4.01%, 750=24.22%, 1000=42.12% 00:19:21.290 lat (msec) : 2000=15.94%, >=2000=11.40% 00:19:21.290 cpu : usr=0.05%, sys=1.97%, ctx=1300, majf=0, minf=32769 00:19:21.290 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.4%, 32=2.8%, >=64=94.4% 00:19:21.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.290 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.290 issued rwts: total=1123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.290 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.290 job5: (groupid=0, jobs=1): err= 0: pid=1493183: Mon Jul 15 23:45:08 2024 00:19:21.290 read: IOPS=64, BW=64.3MiB/s (67.4MB/s)(683MiB/10630msec) 00:19:21.290 slat (usec): min=32, max=1938.7k, avg=15555.69, stdev=113789.75 00:19:21.290 clat (usec): min=1497, max=4223.6k, avg=1526622.02, stdev=882519.81 00:19:21.290 lat (msec): min=692, max=4250, avg=1542.18, stdev=885.03 00:19:21.290 clat percentiles (msec): 00:19:21.290 | 1.00th=[ 718], 5.00th=[ 785], 10.00th=[ 793], 20.00th=[ 810], 00:19:21.290 | 30.00th=[ 953], 40.00th=[ 1036], 50.00th=[ 1116], 60.00th=[ 1234], 00:19:21.290 | 70.00th=[ 1552], 80.00th=[ 2869], 90.00th=[ 3138], 95.00th=[ 3205], 00:19:21.291 | 99.00th=[ 3272], 99.50th=[ 3306], 99.90th=[ 4212], 99.95th=[ 4212], 00:19:21.291 | 99.99th=[ 4212] 00:19:21.291 bw ( KiB/s): min=14336, max=161792, per=3.74%, avg=113664.00, stdev=49201.75, samples=10 00:19:21.291 iops : min= 14, max= 158, avg=111.00, stdev=48.05, samples=10 00:19:21.291 lat (msec) : 2=0.15%, 750=2.49%, 1000=28.11%, 2000=45.97%, >=2000=23.28% 00:19:21.291 cpu : usr=0.04%, sys=1.20%, ctx=901, majf=0, minf=32769 00:19:21.291 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.7%, >=64=90.8% 00:19:21.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.291 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.291 issued rwts: total=683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.291 job5: (groupid=0, jobs=1): err= 0: pid=1493184: Mon Jul 15 23:45:08 2024 00:19:21.291 read: IOPS=7, BW=7615KiB/s (7798kB/s)(80.0MiB/10758msec) 00:19:21.291 slat (usec): min=1711, max=2123.0k, avg=133506.58, stdev=454555.20 00:19:21.291 clat (msec): min=76, max=10668, avg=6228.19, stdev=3210.43 00:19:21.291 lat (msec): min=2123, max=10757, avg=6361.70, 
stdev=3173.23 00:19:21.291 clat percentiles (msec): 00:19:21.291 | 1.00th=[ 77], 5.00th=[ 2140], 10.00th=[ 3675], 20.00th=[ 3775], 00:19:21.291 | 30.00th=[ 3910], 40.00th=[ 4010], 50.00th=[ 4144], 60.00th=[ 4329], 00:19:21.291 | 70.00th=[10268], 80.00th=[10402], 90.00th=[10537], 95.00th=[10537], 00:19:21.291 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:21.291 | 99.99th=[10671] 00:19:21.291 lat (msec) : 100=1.25%, >=2000=98.75% 00:19:21.291 cpu : usr=0.00%, sys=0.34%, ctx=228, majf=0, minf=20481 00:19:21.291 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:19:21.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.291 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:21.291 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.291 job5: (groupid=0, jobs=1): err= 0: pid=1493185: Mon Jul 15 23:45:08 2024 00:19:21.291 read: IOPS=40, BW=40.2MiB/s (42.1MB/s)(433MiB/10772msec) 00:19:21.291 slat (usec): min=42, max=2059.9k, avg=24704.78, stdev=195935.66 00:19:21.291 clat (msec): min=71, max=5250, avg=2175.08, stdev=2087.49 00:19:21.291 lat (msec): min=518, max=5250, avg=2199.79, stdev=2090.83 00:19:21.291 clat percentiles (msec): 00:19:21.291 | 1.00th=[ 518], 5.00th=[ 518], 10.00th=[ 523], 20.00th=[ 523], 00:19:21.291 | 30.00th=[ 527], 40.00th=[ 550], 50.00th=[ 558], 60.00th=[ 667], 00:19:21.291 | 70.00th=[ 4732], 80.00th=[ 4933], 90.00th=[ 5067], 95.00th=[ 5201], 00:19:21.291 | 99.00th=[ 5269], 99.50th=[ 5269], 99.90th=[ 5269], 99.95th=[ 5269], 00:19:21.291 | 99.99th=[ 5269] 00:19:21.291 bw ( KiB/s): min= 2043, max=253952, per=3.43%, avg=104105.83, stdev=120873.91, samples=6 00:19:21.291 iops : min= 1, max= 248, avg=101.50, stdev=118.21, samples=6 00:19:21.291 lat (msec) : 100=0.23%, 750=60.28%, 1000=0.23%, 2000=0.23%, >=2000=39.03% 00:19:21.291 cpu : usr=0.03%, sys=1.13%, ctx=390, majf=0, minf=32769 00:19:21.291 IO depths : 1=0.2%, 2=0.5%, 4=0.9%, 8=1.8%, 16=3.7%, 32=7.4%, >=64=85.5% 00:19:21.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.291 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.291 issued rwts: total=433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.291 job5: (groupid=0, jobs=1): err= 0: pid=1493186: Mon Jul 15 23:45:08 2024 00:19:21.291 read: IOPS=33, BW=33.4MiB/s (35.0MB/s)(356MiB/10672msec) 00:19:21.291 slat (usec): min=34, max=2059.1k, avg=29965.13, stdev=193184.63 00:19:21.291 clat (usec): min=1898, max=10636k, avg=3492460.99, stdev=3101134.74 00:19:21.291 lat (msec): min=649, max=10671, avg=3522.43, stdev=3105.81 00:19:21.291 clat percentiles (msec): 00:19:21.291 | 1.00th=[ 651], 5.00th=[ 651], 10.00th=[ 659], 20.00th=[ 659], 00:19:21.291 | 30.00th=[ 768], 40.00th=[ 902], 50.00th=[ 2022], 60.00th=[ 4279], 00:19:21.291 | 70.00th=[ 4933], 80.00th=[ 6812], 90.00th=[ 8658], 95.00th=[ 8658], 00:19:21.291 | 99.00th=[ 8792], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:21.291 | 99.99th=[10671] 00:19:21.291 bw ( KiB/s): min=10260, max=200303, per=2.20%, avg=66651.86, stdev=62721.36, samples=7 00:19:21.291 iops : min= 10, max= 195, avg=65.00, stdev=61.04, samples=7 00:19:21.291 lat (msec) : 2=0.28%, 750=28.37%, 1000=14.33%, 2000=5.90%, >=2000=51.12% 00:19:21.291 cpu : usr=0.01%, sys=0.79%, ctx=552, majf=0, minf=32769 
00:19:21.291 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=9.0%, >=64=82.3% 00:19:21.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.291 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:21.291 issued rwts: total=356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.291 job5: (groupid=0, jobs=1): err= 0: pid=1493187: Mon Jul 15 23:45:08 2024 00:19:21.291 read: IOPS=42, BW=42.5MiB/s (44.5MB/s)(458MiB/10786msec) 00:19:21.291 slat (usec): min=406, max=2096.7k, avg=23372.47, stdev=167624.06 00:19:21.291 clat (msec): min=77, max=7557, avg=2823.33, stdev=2562.13 00:19:21.291 lat (msec): min=1000, max=7561, avg=2846.70, stdev=2564.26 00:19:21.291 clat percentiles (msec): 00:19:21.291 | 1.00th=[ 1003], 5.00th=[ 1028], 10.00th=[ 1053], 20.00th=[ 1099], 00:19:21.291 | 30.00th=[ 1150], 40.00th=[ 1267], 50.00th=[ 1385], 60.00th=[ 1418], 00:19:21.291 | 70.00th=[ 1469], 80.00th=[ 6745], 90.00th=[ 7080], 95.00th=[ 7349], 00:19:21.291 | 99.00th=[ 7483], 99.50th=[ 7550], 99.90th=[ 7550], 99.95th=[ 7550], 00:19:21.291 | 99.99th=[ 7550] 00:19:21.291 bw ( KiB/s): min= 2043, max=145408, per=2.23%, avg=67583.50, stdev=52632.48, samples=10 00:19:21.291 iops : min= 1, max= 142, avg=65.90, stdev=51.54, samples=10 00:19:21.291 lat (msec) : 100=0.22%, 1000=1.53%, 2000=69.43%, >=2000=28.82% 00:19:21.291 cpu : usr=0.03%, sys=1.01%, ctx=1169, majf=0, minf=32769 00:19:21.291 IO depths : 1=0.2%, 2=0.4%, 4=0.9%, 8=1.7%, 16=3.5%, 32=7.0%, >=64=86.2% 00:19:21.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.291 complete : 0=0.0%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.3% 00:19:21.291 issued rwts: total=458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.291 job5: (groupid=0, jobs=1): err= 0: pid=1493188: Mon Jul 15 23:45:08 2024 00:19:21.291 read: IOPS=5, BW=5487KiB/s (5619kB/s)(58.0MiB/10824msec) 00:19:21.291 slat (usec): min=739, max=2110.0k, avg=185295.50, stdev=584167.10 00:19:21.291 clat (msec): min=76, max=10821, avg=9734.22, stdev=2431.01 00:19:21.291 lat (msec): min=2116, max=10823, avg=9919.51, stdev=2063.79 00:19:21.291 clat percentiles (msec): 00:19:21.291 | 1.00th=[ 77], 5.00th=[ 2165], 10.00th=[ 6477], 20.00th=[ 8658], 00:19:21.291 | 30.00th=[10671], 40.00th=[10805], 50.00th=[10805], 60.00th=[10805], 00:19:21.291 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:19:21.291 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:19:21.291 | 99.99th=[10805] 00:19:21.291 lat (msec) : 100=1.72%, >=2000=98.28% 00:19:21.291 cpu : usr=0.00%, sys=0.46%, ctx=113, majf=0, minf=14849 00:19:21.291 IO depths : 1=1.7%, 2=3.4%, 4=6.9%, 8=13.8%, 16=27.6%, 32=46.6%, >=64=0.0% 00:19:21.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.291 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:21.291 issued rwts: total=58,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.291 job5: (groupid=0, jobs=1): err= 0: pid=1493189: Mon Jul 15 23:45:08 2024 00:19:21.291 read: IOPS=21, BW=21.7MiB/s (22.8MB/s)(233MiB/10727msec) 00:19:21.291 slat (usec): min=501, max=2118.2k, avg=45705.47, stdev=255245.05 00:19:21.291 clat (msec): min=75, max=9397, avg=5418.05, stdev=3419.33 00:19:21.291 lat (msec): min=1521, max=9402, 
avg=5463.76, stdev=3405.39 00:19:21.291 clat percentiles (msec): 00:19:21.291 | 1.00th=[ 1519], 5.00th=[ 1552], 10.00th=[ 1552], 20.00th=[ 1586], 00:19:21.291 | 30.00th=[ 1620], 40.00th=[ 2140], 50.00th=[ 6477], 60.00th=[ 8288], 00:19:21.291 | 70.00th=[ 8490], 80.00th=[ 8792], 90.00th=[ 9060], 95.00th=[ 9329], 00:19:21.291 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9463], 99.95th=[ 9463], 00:19:21.291 | 99.99th=[ 9463] 00:19:21.291 bw ( KiB/s): min= 2048, max=94208, per=1.18%, avg=35840.00, stdev=36423.32, samples=6 00:19:21.291 iops : min= 2, max= 92, avg=35.00, stdev=35.57, samples=6 00:19:21.291 lat (msec) : 100=0.43%, 2000=39.06%, >=2000=60.52% 00:19:21.291 cpu : usr=0.03%, sys=1.11%, ctx=448, majf=0, minf=32769 00:19:21.291 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.9%, 32=13.7%, >=64=73.0% 00:19:21.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.291 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:19:21.291 issued rwts: total=233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.291 job5: (groupid=0, jobs=1): err= 0: pid=1493190: Mon Jul 15 23:45:08 2024 00:19:21.292 read: IOPS=124, BW=124MiB/s (130MB/s)(1250MiB/10048msec) 00:19:21.292 slat (usec): min=34, max=2066.2k, avg=7997.36, stdev=75593.30 00:19:21.292 clat (msec): min=45, max=6717, avg=936.01, stdev=1305.62 00:19:21.292 lat (msec): min=48, max=6725, avg=944.01, stdev=1314.44 00:19:21.292 clat percentiles (msec): 00:19:21.292 | 1.00th=[ 82], 5.00th=[ 228], 10.00th=[ 363], 20.00th=[ 376], 00:19:21.292 | 30.00th=[ 388], 40.00th=[ 393], 50.00th=[ 393], 60.00th=[ 418], 00:19:21.292 | 70.00th=[ 527], 80.00th=[ 600], 90.00th=[ 2937], 95.00th=[ 4665], 00:19:21.292 | 99.00th=[ 5000], 99.50th=[ 5067], 99.90th=[ 6678], 99.95th=[ 6745], 00:19:21.292 | 99.99th=[ 6745] 00:19:21.292 bw ( KiB/s): min=12288, max=342016, per=6.31%, avg=191611.25, stdev=135794.42, samples=12 00:19:21.292 iops : min= 12, max= 334, avg=187.08, stdev=132.59, samples=12 00:19:21.292 lat (msec) : 50=0.24%, 100=1.44%, 250=3.92%, 500=61.84%, 750=13.92% 00:19:21.292 lat (msec) : 2000=7.84%, >=2000=10.80% 00:19:21.292 cpu : usr=0.05%, sys=1.87%, ctx=1392, majf=0, minf=32769 00:19:21.292 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=95.0% 00:19:21.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.292 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:21.292 issued rwts: total=1250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.292 job5: (groupid=0, jobs=1): err= 0: pid=1493191: Mon Jul 15 23:45:08 2024 00:19:21.292 read: IOPS=49, BW=49.8MiB/s (52.2MB/s)(538MiB/10797msec) 00:19:21.292 slat (usec): min=43, max=2097.5k, avg=19931.12, stdev=162769.74 00:19:21.292 clat (msec): min=71, max=6983, avg=2437.14, stdev=2474.16 00:19:21.292 lat (msec): min=531, max=6985, avg=2457.07, stdev=2476.69 00:19:21.292 clat percentiles (msec): 00:19:21.292 | 1.00th=[ 531], 5.00th=[ 531], 10.00th=[ 535], 20.00th=[ 542], 00:19:21.292 | 30.00th=[ 550], 40.00th=[ 558], 50.00th=[ 684], 60.00th=[ 2333], 00:19:21.292 | 70.00th=[ 2500], 80.00th=[ 6544], 90.00th=[ 6745], 95.00th=[ 6879], 00:19:21.292 | 99.00th=[ 6946], 99.50th=[ 6946], 99.90th=[ 7013], 99.95th=[ 7013], 00:19:21.292 | 99.99th=[ 7013] 00:19:21.292 bw ( KiB/s): min= 2048, max=233472, per=3.46%, avg=105002.00, stdev=106938.48, samples=8 
00:19:21.292 iops : min= 2, max= 228, avg=102.50, stdev=104.40, samples=8 00:19:21.292 lat (msec) : 100=0.19%, 750=50.37%, 1000=2.04%, >=2000=47.40% 00:19:21.292 cpu : usr=0.01%, sys=1.07%, ctx=663, majf=0, minf=32769 00:19:21.292 IO depths : 1=0.2%, 2=0.4%, 4=0.7%, 8=1.5%, 16=3.0%, 32=5.9%, >=64=88.3% 00:19:21.292 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.292 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:21.292 issued rwts: total=538,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.292 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:21.292 00:19:21.292 Run status group 0 (all jobs): 00:19:21.292 READ: bw=2965MiB/s (3109MB/s), 1272KiB/s-165MiB/s (1303kB/s-173MB/s), io=37.4GiB (40.2GB), run=10015-12924msec 00:19:21.292 00:19:21.292 Disk stats (read/write): 00:19:21.292 nvme0n1: ios=15704/0, merge=0/0, ticks=4420442/0, in_queue=4420442, util=98.87% 00:19:21.292 nvme1n1: ios=52743/0, merge=0/0, ticks=6559173/0, in_queue=6559173, util=99.01% 00:19:21.292 nvme2n1: ios=47982/0, merge=0/0, ticks=7304159/0, in_queue=7304159, util=98.96% 00:19:21.292 nvme3n1: ios=57475/0, merge=0/0, ticks=6964716/0, in_queue=6964716, util=99.17% 00:19:21.292 nvme4n1: ios=69120/0, merge=0/0, ticks=5935567/0, in_queue=5935567, util=98.92% 00:19:21.292 nvme5n1: ios=62746/0, merge=0/0, ticks=6622587/0, in_queue=6622587, util=99.18% 00:19:21.292 23:45:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:19:21.292 23:45:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:19:21.292 23:45:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:21.292 23:45:08 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:19:21.292 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1213 -- # local i=0 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # grep -q -w SPDK00000000000000 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # grep -q -w SPDK00000000000000 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # return 0 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:21.292 23:45:09 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:21.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:21.857 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # 
waitforserial_disconnect SPDK00000000000001 00:19:21.857 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1213 -- # local i=0 00:19:21.857 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:19:21.857 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # grep -q -w SPDK00000000000001 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # grep -q -w SPDK00000000000001 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # return 0 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:22.114 23:45:10 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:23.046 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1213 -- # local i=0 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # grep -q -w SPDK00000000000002 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # grep -q -w SPDK00000000000002 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # return 0 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:23.046 23:45:11 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:23.977 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1213 -- # local i=0 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # grep -q -w SPDK00000000000003 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # grep -q -w SPDK00000000000003 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # return 0 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:23.977 23:45:12 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:24.908 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1213 -- # local i=0 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # grep -q -w SPDK00000000000004 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # grep -q -w SPDK00000000000004 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # return 0 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:24.908 23:45:13 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:25.837 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1213 -- # local i=0 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # lsblk -o NAME,SERIAL 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1214 -- # grep -q -w SPDK00000000000005 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # lsblk -l -o NAME,SERIAL 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1221 -- # grep -q -w SPDK00000000000005 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1225 -- # return 0 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:25.837 23:45:14 
nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:25.837 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:25.837 rmmod nvme_rdma 00:19:26.094 rmmod nvme_fabrics 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 1491671 ']' 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 1491671 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@942 -- # '[' -z 1491671 ']' 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@946 -- # kill -0 1491671 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@947 -- # uname 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1491671 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1491671' 00:19:26.094 killing process with pid 1491671 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@961 -- # kill 1491671 00:19:26.094 23:45:14 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@966 -- # wait 1491671 00:19:26.352 23:45:15 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:26.352 23:45:15 nvmf_rdma.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:26.352 00:19:26.352 real 0m32.986s 00:19:26.352 user 1m56.455s 00:19:26.352 sys 0m13.343s 00:19:26.352 23:45:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:26.352 23:45:15 nvmf_rdma.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:26.352 ************************************ 00:19:26.352 END TEST nvmf_srq_overwhelm 00:19:26.352 ************************************ 00:19:26.352 23:45:15 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:19:26.352 
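The trace above tears down the six test subsystems one at a time: disconnect the initiator-side controller, wait until its serial number no longer shows up in lsblk, then delete the subsystem on the target over RPC. The following is a minimal sketch of that pattern, not the exact srq_overwhelm.sh/common.sh code; the helper name wait_for_serial_gone, the rpc.py path, and the retry limit are assumptions for illustration only.

#!/usr/bin/env bash
RPC=./scripts/rpc.py   # assumed location of SPDK's RPC client

wait_for_serial_gone() {
    local serial=$1 i=0
    # Poll lsblk until no block device reports the given serial (give up after ~15s).
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 1
    done
    return 0
}

for i in $(seq 0 5); do
    nqn="nqn.2016-06.io.spdk:cnode${i}"
    serial="SPDK0000000000000${i}"
    nvme disconnect -n "$nqn"             # drop the initiator connection
    wait_for_serial_gone "$serial"        # confirm the namespace is gone locally
    "$RPC" nvmf_delete_subsystem "$nqn"   # remove the subsystem on the target
done

Waiting on the serial rather than sleeping a fixed interval is what keeps the teardown deterministic: the next subsystem is only deleted once the kernel has actually released the previous namespace.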
23:45:15 nvmf_rdma -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:26.352 23:45:15 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:19:26.352 23:45:15 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:26.352 23:45:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:26.352 ************************************ 00:19:26.352 START TEST nvmf_shutdown 00:19:26.352 ************************************ 00:19:26.352 23:45:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:26.610 * Looking for test storage... 00:19:26.610 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 
00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:26.610 ************************************ 00:19:26.610 START TEST nvmf_shutdown_tc1 00:19:26.610 ************************************ 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc1 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:26.610 23:45:15 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.873 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # 
local -ga x722 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:31.874 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:31.874 23:45:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:31.874 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:31.874 Found net devices under 0000:da:00.0: mlx_0_0 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:31.874 Found net devices under 0000:da:00.1: mlx_0_1 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:31.874 23:45:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:31.874 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:31.874 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:31.874 altname enp218s0f0np0 00:19:31.874 altname ens818f0np0 00:19:31.874 inet 192.168.100.8/24 scope global mlx_0_0 00:19:31.874 valid_lft forever preferred_lft forever 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:31.874 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:31.875 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:31.875 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:31.875 altname enp218s0f1np1 00:19:31.875 altname ens818f1np1 00:19:31.875 inet 192.168.100.9/24 scope global mlx_0_1 00:19:31.875 valid_lft forever preferred_lft forever 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:31.875 23:45:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:31.875 192.168.100.9' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:31.875 192.168.100.9' 00:19:31.875 23:45:20 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:31.875 192.168.100.9' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1499751 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1499751 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 1499751 ']' 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:31.875 23:45:20 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:32.134 [2024-07-15 23:45:20.892060] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:32.134 [2024-07-15 23:45:20.892116] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.134 [2024-07-15 23:45:20.951068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:32.134 [2024-07-15 23:45:21.032844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.134 [2024-07-15 23:45:21.032879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.134 [2024-07-15 23:45:21.032886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.134 [2024-07-15 23:45:21.032893] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.134 [2024-07-15 23:45:21.032898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.134 [2024-07-15 23:45:21.032999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.134 [2024-07-15 23:45:21.035556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:32.134 [2024-07-15 23:45:21.035688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.134 [2024-07-15 23:45:21.035688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:33.068 [2024-07-15 23:45:21.765425] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1575e10/0x157a300) succeed. 00:19:33.068 [2024-07-15 23:45:21.774560] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1577400/0x15bb990) succeed. 
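At this point in the trace the `nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192` RPC has been issued and both mlx5 IB devices have come up; the create_subsystems step that follows clears rpcs.txt and, for each of the ten subsystems, appends one RPC group via `cat` before replaying the whole batch with a single `rpc_cmd`. The sketch below is illustrative only: the `{1..10}` loop, the Malloc1..Malloc10 bdev names and the RDMA listener on 192.168.100.8:4420 are confirmed by the log, but the bdev sizes, serial numbers and subsystem flags are assumptions, not the exact arguments used by shutdown.sh.

```bash
# Illustrative sketch of the per-subsystem RPC batch built by create_subsystems.
# Confirmed by the log: the {1..10} loop, Malloc1..Malloc10, and the RDMA
# listener on 192.168.100.8:4420. Assumed: bdev size/block size, serial
# numbers, and the -a (allow-any-host) flag.
rpc_file=rpcs.txt   # shutdown.sh uses test/nvmf/target/rpcs.txt per the trace
rm -f "$rpc_file"
for i in {1..10}; do
    cat >> "$rpc_file" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
EOF
done
# Replayed in one shot against the running nvmf_tgt, e.g.:
#   scripts/rpc.py < "$rpc_file"
```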
00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:33.068 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:33.069 23:45:21 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:33.069 Malloc1 00:19:33.069 [2024-07-15 23:45:21.986412] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:33.069 Malloc2 00:19:33.327 Malloc3 00:19:33.327 Malloc4 
00:19:33.327 Malloc5 00:19:33.327 Malloc6 00:19:33.327 Malloc7 00:19:33.327 Malloc8 00:19:33.585 Malloc9 00:19:33.585 Malloc10 00:19:33.585 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:33.585 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:33.585 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:33.585 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1500052 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1500052 /var/tmp/bdevperf.sock 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@823 -- # '[' -z 1500052 ']' 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
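The traces that follow show `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` building the `--json` configuration that bdev_svc (and later bdevperf) read over `/dev/fd/63`: one `bdev_nvme_attach_controller` fragment per subsystem is appended to a `config` array, then the fragments are comma-joined and printed with the environment variables resolved (rdma / 192.168.100.8 / 4420). Below is a condensed sketch of that accumulate-and-join pattern as it appears in the trace; the outer JSON wrapper that nvmf/common.sh puts around these fragments is not fully reproduced in this log and is omitted here, and `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP` and `NVMF_PORT` are assumed to come from the test environment.

```bash
# Condensed from the visible trace: one attach-controller fragment per
# subsystem, comma-joined at the end. The surrounding SPDK JSON config
# wrapper emitted by gen_nvmf_target_json is omitted (not shown in the log).
config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Join with commas and print, as the IFS=, / printf pair in the trace does
# (run in a subshell here so the caller's IFS is untouched).
(IFS=,; printf '%s\n' "${config[*]}")
```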
00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": 
"$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 [2024-07-15 23:45:22.461192] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:33.586 [2024-07-15 23:45:22.461240] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:33.586 { 00:19:33.586 "params": { 00:19:33.586 "name": "Nvme$subsystem", 00:19:33.586 "trtype": "$TEST_TRANSPORT", 00:19:33.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:33.586 "adrfam": "ipv4", 00:19:33.586 "trsvcid": "$NVMF_PORT", 00:19:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:33.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:33.586 "hdgst": ${hdgst:-false}, 00:19:33.586 "ddgst": ${ddgst:-false} 00:19:33.586 }, 00:19:33.586 "method": "bdev_nvme_attach_controller" 
00:19:33.586 } 00:19:33.586 EOF 00:19:33.586 )") 00:19:33.586 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:33.587 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:33.587 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:33.587 23:45:22 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme1", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme2", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme3", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme4", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme5", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme6", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme7", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 
"name": "Nvme8", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme9", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 },{ 00:19:33.587 "params": { 00:19:33.587 "name": "Nvme10", 00:19:33.587 "trtype": "rdma", 00:19:33.587 "traddr": "192.168.100.8", 00:19:33.587 "adrfam": "ipv4", 00:19:33.587 "trsvcid": "4420", 00:19:33.587 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:33.587 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:33.587 "hdgst": false, 00:19:33.587 "ddgst": false 00:19:33.587 }, 00:19:33.587 "method": "bdev_nvme_attach_controller" 00:19:33.587 }' 00:19:33.587 [2024-07-15 23:45:22.519603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.845 [2024-07-15 23:45:22.593506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.785 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:34.786 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # return 0 00:19:34.786 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:34.786 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:34.786 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:34.786 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:34.786 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1500052 00:19:34.786 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:34.786 23:45:23 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:35.801 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1500052 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1499751 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem 
in "${@:-1}" 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.801 { 00:19:35.801 "params": { 00:19:35.801 "name": "Nvme$subsystem", 00:19:35.801 "trtype": "$TEST_TRANSPORT", 00:19:35.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.801 "adrfam": "ipv4", 00:19:35.801 "trsvcid": "$NVMF_PORT", 00:19:35.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.801 "hdgst": ${hdgst:-false}, 00:19:35.801 "ddgst": ${ddgst:-false} 00:19:35.801 }, 00:19:35.801 "method": "bdev_nvme_attach_controller" 00:19:35.801 } 00:19:35.801 EOF 00:19:35.801 )") 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.801 { 00:19:35.801 "params": { 00:19:35.801 "name": "Nvme$subsystem", 00:19:35.801 "trtype": "$TEST_TRANSPORT", 00:19:35.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.801 "adrfam": "ipv4", 00:19:35.801 "trsvcid": "$NVMF_PORT", 00:19:35.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.801 "hdgst": ${hdgst:-false}, 00:19:35.801 "ddgst": ${ddgst:-false} 00:19:35.801 }, 00:19:35.801 "method": "bdev_nvme_attach_controller" 00:19:35.801 } 00:19:35.801 EOF 00:19:35.801 )") 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.801 { 00:19:35.801 "params": { 00:19:35.801 "name": "Nvme$subsystem", 00:19:35.801 "trtype": "$TEST_TRANSPORT", 00:19:35.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.801 "adrfam": "ipv4", 00:19:35.801 "trsvcid": "$NVMF_PORT", 00:19:35.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.801 "hdgst": ${hdgst:-false}, 00:19:35.801 "ddgst": ${ddgst:-false} 00:19:35.801 }, 00:19:35.801 "method": "bdev_nvme_attach_controller" 00:19:35.801 } 00:19:35.801 EOF 00:19:35.801 )") 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.801 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.802 { 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme$subsystem", 00:19:35.802 "trtype": "$TEST_TRANSPORT", 00:19:35.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "$NVMF_PORT", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.802 "hdgst": ${hdgst:-false}, 00:19:35.802 "ddgst": ${ddgst:-false} 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 } 00:19:35.802 EOF 00:19:35.802 )") 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.802 { 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme$subsystem", 00:19:35.802 "trtype": "$TEST_TRANSPORT", 00:19:35.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "$NVMF_PORT", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.802 "hdgst": ${hdgst:-false}, 00:19:35.802 "ddgst": ${ddgst:-false} 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 } 00:19:35.802 EOF 00:19:35.802 )") 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.802 { 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme$subsystem", 00:19:35.802 "trtype": "$TEST_TRANSPORT", 00:19:35.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "$NVMF_PORT", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.802 "hdgst": ${hdgst:-false}, 00:19:35.802 "ddgst": ${ddgst:-false} 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 } 00:19:35.802 EOF 00:19:35.802 )") 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.802 { 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme$subsystem", 00:19:35.802 "trtype": "$TEST_TRANSPORT", 00:19:35.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "$NVMF_PORT", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.802 "hdgst": ${hdgst:-false}, 00:19:35.802 "ddgst": ${ddgst:-false} 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 } 00:19:35.802 EOF 00:19:35.802 )") 00:19:35.802 [2024-07-15 23:45:24.497340] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:35.802 [2024-07-15 23:45:24.497388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500522 ] 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.802 { 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme$subsystem", 00:19:35.802 "trtype": "$TEST_TRANSPORT", 00:19:35.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "$NVMF_PORT", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.802 "hdgst": ${hdgst:-false}, 00:19:35.802 "ddgst": ${ddgst:-false} 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 } 00:19:35.802 EOF 00:19:35.802 )") 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.802 { 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme$subsystem", 00:19:35.802 "trtype": "$TEST_TRANSPORT", 00:19:35.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "$NVMF_PORT", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.802 "hdgst": ${hdgst:-false}, 00:19:35.802 "ddgst": ${ddgst:-false} 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 } 00:19:35.802 EOF 00:19:35.802 )") 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.802 { 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme$subsystem", 00:19:35.802 "trtype": "$TEST_TRANSPORT", 00:19:35.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "$NVMF_PORT", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.802 "hdgst": ${hdgst:-false}, 00:19:35.802 "ddgst": ${ddgst:-false} 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 } 00:19:35.802 EOF 00:19:35.802 )") 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:35.802 23:45:24 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme1", 00:19:35.802 "trtype": "rdma", 00:19:35.802 "traddr": "192.168.100.8", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "4420", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.802 "hdgst": false, 00:19:35.802 "ddgst": false 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 },{ 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme2", 00:19:35.802 "trtype": "rdma", 00:19:35.802 "traddr": "192.168.100.8", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "4420", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:35.802 "hdgst": false, 00:19:35.802 "ddgst": false 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 },{ 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme3", 00:19:35.802 "trtype": "rdma", 00:19:35.802 "traddr": "192.168.100.8", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "4420", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:35.802 "hdgst": false, 00:19:35.802 "ddgst": false 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 },{ 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme4", 00:19:35.802 "trtype": "rdma", 00:19:35.802 "traddr": "192.168.100.8", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "4420", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:35.802 "hdgst": false, 00:19:35.802 "ddgst": false 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 },{ 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme5", 00:19:35.802 "trtype": "rdma", 00:19:35.802 "traddr": "192.168.100.8", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "4420", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:35.802 "hdgst": false, 00:19:35.802 "ddgst": false 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 },{ 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme6", 00:19:35.802 "trtype": "rdma", 00:19:35.802 "traddr": "192.168.100.8", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "4420", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:35.802 "hdgst": false, 00:19:35.802 "ddgst": false 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 },{ 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme7", 00:19:35.802 "trtype": "rdma", 00:19:35.802 "traddr": "192.168.100.8", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "4420", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:35.802 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:35.802 "hdgst": false, 00:19:35.802 "ddgst": false 00:19:35.802 }, 00:19:35.802 "method": "bdev_nvme_attach_controller" 00:19:35.802 },{ 00:19:35.802 "params": { 00:19:35.802 "name": "Nvme8", 00:19:35.802 "trtype": "rdma", 00:19:35.802 "traddr": "192.168.100.8", 00:19:35.802 "adrfam": "ipv4", 00:19:35.802 "trsvcid": "4420", 00:19:35.802 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:35.802 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:19:35.802 "hdgst": false, 00:19:35.802 "ddgst": false 00:19:35.802 }, 00:19:35.803 "method": "bdev_nvme_attach_controller" 00:19:35.803 },{ 00:19:35.803 "params": { 00:19:35.803 "name": "Nvme9", 00:19:35.803 "trtype": "rdma", 00:19:35.803 "traddr": "192.168.100.8", 00:19:35.803 "adrfam": "ipv4", 00:19:35.803 "trsvcid": "4420", 00:19:35.803 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:35.803 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:35.803 "hdgst": false, 00:19:35.803 "ddgst": false 00:19:35.803 }, 00:19:35.803 "method": "bdev_nvme_attach_controller" 00:19:35.803 },{ 00:19:35.803 "params": { 00:19:35.803 "name": "Nvme10", 00:19:35.803 "trtype": "rdma", 00:19:35.803 "traddr": "192.168.100.8", 00:19:35.803 "adrfam": "ipv4", 00:19:35.803 "trsvcid": "4420", 00:19:35.803 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:35.803 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:35.803 "hdgst": false, 00:19:35.803 "ddgst": false 00:19:35.803 }, 00:19:35.803 "method": "bdev_nvme_attach_controller" 00:19:35.803 }' 00:19:35.803 [2024-07-15 23:45:24.555435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.803 [2024-07-15 23:45:24.629617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.735 Running I/O for 1 seconds... 00:19:38.136 00:19:38.136 Latency(us) 00:19:38.136 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.136 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.136 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme1n1 : 1.16 343.40 21.46 0.00 0.00 181894.54 21470.84 201726.05 00:19:38.137 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme2n1 : 1.17 344.85 21.55 0.00 0.00 178536.54 6054.28 187745.04 00:19:38.137 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme3n1 : 1.17 356.51 22.28 0.00 0.00 171093.97 21595.67 180754.53 00:19:38.137 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme4n1 : 1.17 383.52 23.97 0.00 0.00 157022.49 4525.10 130822.34 00:19:38.137 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme5n1 : 1.18 394.20 24.64 0.00 0.00 151347.97 7271.38 159783.01 00:19:38.137 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme6n1 : 1.17 382.67 23.92 0.00 0.00 154286.85 17476.27 117340.65 00:19:38.137 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme7n1 : 1.18 407.38 25.46 0.00 0.00 142533.84 5835.82 110350.14 00:19:38.137 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme8n1 : 1.18 384.23 24.01 0.00 0.00 148605.76 5679.79 103359.63 00:19:38.137 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme9n1 : 1.17 381.51 23.84 0.00 0.00 148142.99 10111.27 93373.20 00:19:38.137 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:19:38.137 Verification LBA range: start 0x0 length 0x400 00:19:38.137 Nvme10n1 : 1.18 380.96 23.81 0.00 0.00 146019.89 11109.91 108852.18 00:19:38.137 =================================================================================================================== 00:19:38.137 Total : 3759.24 234.95 0.00 0.00 157230.83 4525.10 201726.05 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:38.137 23:45:26 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:38.137 rmmod nvme_rdma 00:19:38.137 rmmod nvme_fabrics 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1499751 ']' 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1499751 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@942 -- # '[' -z 1499751 ']' 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # kill -0 1499751 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # uname 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1499751 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1499751' 00:19:38.137 killing process with pid 1499751 00:19:38.137 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@961 -- # kill 1499751 00:19:38.137 
23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # wait 1499751 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:38.703 00:19:38.703 real 0m12.036s 00:19:38.703 user 0m30.220s 00:19:38.703 sys 0m5.042s 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:38.703 ************************************ 00:19:38.703 END TEST nvmf_shutdown_tc1 00:19:38.703 ************************************ 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:38.703 ************************************ 00:19:38.703 START TEST nvmf_shutdown_tc2 00:19:38.703 ************************************ 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc2 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:38.703 23:45:27 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:38.703 23:45:27 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:38.703 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:38.703 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:38.703 Found net devices under 0000:da:00.0: mlx_0_0 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:38.703 23:45:27 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:38.703 Found net devices under 0000:da:00.1: mlx_0_1 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:38.703 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # 
[[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:38.704 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:38.962 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:38.962 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:38.962 altname enp218s0f0np0 00:19:38.962 altname ens818f0np0 00:19:38.962 inet 192.168.100.8/24 scope global mlx_0_0 00:19:38.962 valid_lft forever preferred_lft forever 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:38.962 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:38.962 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 
00:19:38.962 altname enp218s0f1np1 00:19:38.962 altname ens818f1np1 00:19:38.962 inet 192.168.100.9/24 scope global mlx_0_1 00:19:38.962 valid_lft forever preferred_lft forever 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut 
-d/ -f1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:38.962 192.168.100.9' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:38.962 192.168.100.9' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:38.962 192.168.100.9' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1501089 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1501089 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1501089 ']' 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:38.962 23:45:27 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:38.962 23:45:27 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:38.962 [2024-07-15 23:45:27.845470] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:38.962 [2024-07-15 23:45:27.845521] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.962 [2024-07-15 23:45:27.902030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.220 [2024-07-15 23:45:27.978797] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.220 [2024-07-15 23:45:27.978853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:39.220 [2024-07-15 23:45:27.978860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.220 [2024-07-15 23:45:27.978866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.220 [2024-07-15 23:45:27.978871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.220 [2024-07-15 23:45:27.978978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.220 [2024-07-15 23:45:27.979007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.220 [2024-07-15 23:45:27.979096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.220 [2024-07-15 23:45:27.979097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:39.786 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:39.786 [2024-07-15 23:45:28.703492] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xef2e10/0xef7300) succeed. 00:19:39.786 [2024-07-15 23:45:28.712676] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xef4400/0xf38990) succeed. 
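The subsystem setup that follows batches per-subsystem RPCs into rpcs.txt and replays them against the target started above. The exact contents of rpcs.txt are never echoed in this trace, so the sketch below is only an assumed, typical sequence for this kind of run: the malloc bdev size and block size are illustrative guesses, while the listener address and port match the 192.168.100.8:4420 listener reported a few lines further down.

```bash
# Assumed shape of the per-subsystem RPCs batched into rpcs.txt by the loop below;
# the 64 MiB / 512 B malloc geometry is illustrative, not taken from this log.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
for i in {1..10}; do
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t rdma -f ipv4 -a 192.168.100.8 -s 4420
done
```

Issued one at a time like this, the effect is the same as the single batched rpc_cmd in the trace, just slower per call.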
00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:40.044 23:45:28 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:40.044 Malloc1 00:19:40.044 [2024-07-15 23:45:28.919766] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:40.044 Malloc2 00:19:40.044 Malloc3 00:19:40.301 Malloc4 
00:19:40.301 Malloc5 00:19:40.301 Malloc6 00:19:40.301 Malloc7 00:19:40.301 Malloc8 00:19:40.301 Malloc9 00:19:40.560 Malloc10 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1501367 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1501367 /var/tmp/bdevperf.sock 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1501367 ']' 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:40.560 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:40.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
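For reference, the bdevperf invocation above can be reproduced outside the harness with a small JSON config. The attach-controller parameters in the sketch are the ones printed later in this trace for Nvme1; the surrounding "subsystems"/"bdev"/"config" wrapper is the standard SPDK JSON config layout rather than something shown here, and the temp-file path is a hypothetical stand-in for the --json /dev/fd/63 process substitution.

```bash
# Minimal sketch: one RDMA controller instead of the ten used by the test.
# /tmp/bdevperf.json is a hypothetical stand-in for the --json /dev/fd/63 pipe.
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "rdma",
            "traddr": "192.168.100.8",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
# Same knobs as the test: queue depth 64, 64 KiB IOs, verify workload, 10 s run.
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10
```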
00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": 
"$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 [2024-07-15 23:45:29.390695] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:40.561 [2024-07-15 23:45:29.390744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501367 ] 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.561 { 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme$subsystem", 00:19:40.561 "trtype": "$TEST_TRANSPORT", 00:19:40.561 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "$NVMF_PORT", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.561 "hdgst": ${hdgst:-false}, 00:19:40.561 "ddgst": ${ddgst:-false} 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 } 00:19:40.561 EOF 00:19:40.561 )") 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:40.561 23:45:29 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:40.561 "params": { 00:19:40.561 "name": "Nvme1", 00:19:40.561 "trtype": "rdma", 00:19:40.561 "traddr": "192.168.100.8", 00:19:40.561 "adrfam": "ipv4", 00:19:40.561 "trsvcid": "4420", 00:19:40.561 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.561 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.561 "hdgst": false, 00:19:40.561 "ddgst": false 00:19:40.561 }, 00:19:40.561 "method": "bdev_nvme_attach_controller" 00:19:40.561 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme2", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:40.562 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme3", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:40.562 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme4", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:40.562 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme5", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:40.562 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme6", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:40.562 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme7", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:40.562 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme8", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:40.562 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme9", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:40.562 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 },{ 00:19:40.562 "params": { 00:19:40.562 "name": "Nvme10", 00:19:40.562 "trtype": "rdma", 00:19:40.562 "traddr": "192.168.100.8", 00:19:40.562 "adrfam": "ipv4", 00:19:40.562 "trsvcid": "4420", 00:19:40.562 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:40.562 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:40.562 "hdgst": false, 00:19:40.562 "ddgst": false 00:19:40.562 }, 00:19:40.562 "method": "bdev_nvme_attach_controller" 00:19:40.562 }' 00:19:40.562 [2024-07-15 23:45:29.448210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.562 [2024-07-15 23:45:29.521737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.496 Running I/O for 10 seconds... 00:19:41.496 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:41.496 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # return 0 00:19:41.496 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:41.496 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:41.496 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=19 
00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 19 -ge 100 ']' 00:19:41.754 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:42.012 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:42.012 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:42.012 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:42.012 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:42.012 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:42.012 23:45:30 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=179 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 179 -ge 100 ']' 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1501367 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 1501367 ']' 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 1501367 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1501367 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1501367' 00:19:42.270 killing process with pid 1501367 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 1501367 00:19:42.270 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 1501367 00:19:42.270 Received shutdown signal, test time was about 0.850172 seconds 00:19:42.270 00:19:42.270 Latency(us) 00:19:42.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:42.270 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.270 Verification LBA range: start 0x0 length 0x400 00:19:42.270 Nvme1n1 : 0.84 363.91 22.74 0.00 0.00 171859.02 5679.79 236678.58 00:19:42.270 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.270 Verification LBA range: start 0x0 length 0x400 00:19:42.270 
Nvme2n1 : 0.84 384.88 24.06 0.00 0.00 159227.48 4962.01 164776.23 00:19:42.270 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.270 Verification LBA range: start 0x0 length 0x400 00:19:42.271 Nvme3n1 : 0.84 381.95 23.87 0.00 0.00 157223.20 8363.64 157785.72 00:19:42.271 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.271 Verification LBA range: start 0x0 length 0x400 00:19:42.271 Nvme4n1 : 0.84 381.43 23.84 0.00 0.00 154349.47 8613.30 150795.22 00:19:42.271 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.271 Verification LBA range: start 0x0 length 0x400 00:19:42.271 Nvme5n1 : 0.84 380.78 23.80 0.00 0.00 152035.52 9175.04 139810.13 00:19:42.271 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.271 Verification LBA range: start 0x0 length 0x400 00:19:42.271 Nvme6n1 : 0.84 380.17 23.76 0.00 0.00 149093.13 9736.78 130822.34 00:19:42.271 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.271 Verification LBA range: start 0x0 length 0x400 00:19:42.271 Nvme7n1 : 0.84 379.64 23.73 0.00 0.00 145916.88 10048.85 123831.83 00:19:42.271 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.271 Verification LBA range: start 0x0 length 0x400 00:19:42.271 Nvme8n1 : 0.84 379.08 23.69 0.00 0.00 143154.81 10423.34 115343.36 00:19:42.271 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.271 Verification LBA range: start 0x0 length 0x400 00:19:42.271 Nvme9n1 : 0.85 378.44 23.65 0.00 0.00 140693.11 10985.08 104358.28 00:19:42.271 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:42.271 Verification LBA range: start 0x0 length 0x400 00:19:42.271 Nvme10n1 : 0.85 301.35 18.83 0.00 0.00 173131.92 2964.72 240673.16 00:19:42.271 =================================================================================================================== 00:19:42.271 Total : 3711.65 231.98 0.00 0.00 154204.78 2964.72 240673.16 00:19:42.837 23:45:31 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1501089 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:43.771 23:45:32 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:43.771 rmmod nvme_rdma 00:19:43.771 rmmod nvme_fabrics 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1501089 ']' 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1501089 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@942 -- # '[' -z 1501089 ']' 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # kill -0 1501089 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # uname 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:43.771 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1501089 00:19:43.772 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:43.772 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:43.772 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1501089' 00:19:43.772 killing process with pid 1501089 00:19:43.772 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@961 -- # kill 1501089 00:19:43.772 23:45:32 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # wait 1501089 00:19:44.340 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:44.340 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:44.340 00:19:44.340 real 0m5.507s 00:19:44.340 user 0m22.332s 00:19:44.340 sys 0m1.017s 00:19:44.340 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:44.340 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:44.340 ************************************ 00:19:44.340 END TEST nvmf_shutdown_tc2 00:19:44.340 ************************************ 00:19:44.340 23:45:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:44.341 ************************************ 00:19:44.341 START TEST nvmf_shutdown_tc3 00:19:44.341 ************************************ 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1117 -- # nvmf_shutdown_tc3 00:19:44.341 23:45:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:44.341 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:44.341 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # 
[[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:44.341 Found net devices under 0000:da:00.0: mlx_0_0 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.341 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:44.341 Found net devices under 0000:da:00.1: mlx_0_1 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:44.342 23:45:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- 
# awk '{print $4}' 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:44.342 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:44.342 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:44.342 altname enp218s0f0np0 00:19:44.342 altname ens818f0np0 00:19:44.342 inet 192.168.100.8/24 scope global mlx_0_0 00:19:44.342 valid_lft forever preferred_lft forever 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:44.342 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:44.342 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:44.342 altname enp218s0f1np1 00:19:44.342 altname ens818f1np1 00:19:44.342 inet 192.168.100.9/24 scope global mlx_0_1 00:19:44.342 valid_lft forever preferred_lft forever 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:44.342 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:44.601 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:44.602 23:45:33 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:44.602 192.168.100.9' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:44.602 192.168.100.9' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:44.602 192.168.100.9' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1502171 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1502171 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 1502171 ']' 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:44.602 23:45:33 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:44.602 [2024-07-15 23:45:33.439367] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:44.602 [2024-07-15 23:45:33.439414] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.602 [2024-07-15 23:45:33.495748] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:44.602 [2024-07-15 23:45:33.578362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.602 [2024-07-15 23:45:33.578396] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.602 [2024-07-15 23:45:33.578402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.602 [2024-07-15 23:45:33.578408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:44.602 [2024-07-15 23:45:33.578413] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.602 [2024-07-15 23:45:33.578451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.602 [2024-07-15 23:45:33.578478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.602 [2024-07-15 23:45:33.578585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.602 [2024-07-15 23:45:33.578586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:45.537 [2024-07-15 23:45:34.309772] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a9ee10/0x1aa3300) succeed. 00:19:45.537 [2024-07-15 23:45:34.320231] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1aa0400/0x1ae4990) succeed. 
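The mlx_0_0/mlx_0_1 address discovery traced above in nvmf/common.sh (the ip/awk/cut pipeline and the RDMA_IP_LIST head/tail split at 23:45:33) reduces to a short helper. The following is a rough reconstruction from that xtrace only, not the verbatim source: the interface list is hard-coded here for brevity (the real code walks get_rdma_if_list), and the standalone-script framing is an assumption.

  # Sketch reconstructed from the nvmf/common.sh xtrace above.
  get_ip_address() {
      local interface=$1
      # "ip -o -4" prints one line per address; field 4 is CIDR, cut drops the prefix
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  RDMA_IP_LIST=$(
      for nic_name in mlx_0_0 mlx_0_1; do   # as reported by get_rdma_if_list here
          get_ip_address "$nic_name"        # -> 192.168.100.8 and 192.168.100.9
      done
  )
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)

192.168.100.8 resolved this way is the address the RDMA listener below is created on.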
00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:45.537 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:45.537 Malloc1 00:19:45.795 [2024-07-15 23:45:34.527467] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:45.795 Malloc2 00:19:45.795 Malloc3 00:19:45.795 Malloc4 
00:19:45.795 Malloc5 00:19:45.795 Malloc6 00:19:45.795 Malloc7 00:19:46.054 Malloc8 00:19:46.054 Malloc9 00:19:46.054 Malloc10 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1502456 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1502456 /var/tmp/bdevperf.sock 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@823 -- # '[' -z 1502456 ']' 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
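The bdevperf launch traced at target/shutdown.sh@124-126 hands the controller list to bdevperf through process substitution, which is why the command line above shows --json /dev/fd/63. A reduced sketch of that invocation follows, trimmed to a single controller matching the printf output traced further down; the "subsystems"/"bdev"/"config" wrapper around the printed entries is inferred from the jq step and bdevperf's JSON config format rather than shown verbatim in the log, so treat it as an assumption.

  # One-controller version of the generated bdevperf config (sketch).
  config='{ "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": { "name": "Nvme1", "trtype": "rdma",
                  "traddr": "192.168.100.8", "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" } ] } ] }'

  # Process substitution turns the generated JSON into /dev/fd/63 for bdevperf.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(printf '%s\n' "$config") -q 64 -o 65536 -w verify -t 10

The queue depth (-q 64), I/O size (-o 65536), verify workload and 10-second runtime match the job table reported at the end of tc2.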
00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": "$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": "$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": "$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": 
"$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": "$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": "$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": "$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 [2024-07-15 23:45:34.999339] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:19:46.054 [2024-07-15 23:45:34.999384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1502456 ] 00:19:46.054 23:45:34 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": "$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.054 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.054 { 00:19:46.054 "params": { 00:19:46.054 "name": "Nvme$subsystem", 00:19:46.054 "trtype": "$TEST_TRANSPORT", 00:19:46.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.054 "adrfam": "ipv4", 00:19:46.054 "trsvcid": "$NVMF_PORT", 00:19:46.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.054 "hdgst": ${hdgst:-false}, 00:19:46.054 "ddgst": ${ddgst:-false} 00:19:46.054 }, 00:19:46.054 "method": "bdev_nvme_attach_controller" 00:19:46.054 } 00:19:46.054 EOF 00:19:46.054 )") 00:19:46.054 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.054 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.055 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.055 { 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme$subsystem", 00:19:46.055 "trtype": "$TEST_TRANSPORT", 00:19:46.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "$NVMF_PORT", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.055 "hdgst": ${hdgst:-false}, 00:19:46.055 "ddgst": ${ddgst:-false} 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 } 00:19:46.055 EOF 00:19:46.055 )") 00:19:46.055 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:46.055 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:19:46.055 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:46.055 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme1", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme2", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme3", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme4", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme5", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme6", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme7", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme8", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:46.055 "hostnqn": 
"nqn.2016-06.io.spdk:host8", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme9", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 },{ 00:19:46.055 "params": { 00:19:46.055 "name": "Nvme10", 00:19:46.055 "trtype": "rdma", 00:19:46.055 "traddr": "192.168.100.8", 00:19:46.055 "adrfam": "ipv4", 00:19:46.055 "trsvcid": "4420", 00:19:46.055 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:46.055 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:46.055 "hdgst": false, 00:19:46.055 "ddgst": false 00:19:46.055 }, 00:19:46.055 "method": "bdev_nvme_attach_controller" 00:19:46.055 }' 00:19:46.313 [2024-07-15 23:45:35.054692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.313 [2024-07-15 23:45:35.127935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.247 Running I/O for 10 seconds... 00:19:47.247 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:47.247 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # return 0 00:19:47.247 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:47.247 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:47.247 23:45:35 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:47.247 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:47.505 23:45:36 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:47.505 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=19 00:19:47.505 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 19 -ge 100 ']' 00:19:47.505 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=171 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 171 -ge 100 ']' 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1502171 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@942 -- # '[' -z 1502171 ']' 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # kill -0 1502171 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # uname 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:47.763 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1502171 00:19:48.032 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:19:48.032 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:19:48.032 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1502171' 00:19:48.032 killing process with pid 1502171 00:19:48.032 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@961 -- # kill 1502171 00:19:48.032 23:45:36 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # wait 1502171 00:19:48.032 [2024-07-15 23:45:36.798678] rdma.c: 864:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 9 00:19:48.291 23:45:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:48.291 23:45:37 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:48.857 [2024-07-15 23:45:37.820189] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.857 [2024-07-15 23:45:37.820227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:19:48.857 [2024-07-15 23:45:37.820238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.857 [2024-07-15 23:45:37.820244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:19:48.857 [2024-07-15 23:45:37.820251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.857 [2024-07-15 23:45:37.820257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:19:48.857 [2024-07-15 23:45:37.820263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.857 [2024-07-15 23:45:37.820269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:8192 cdw0:0 sqhd:f200 p:0 m:0 dnr:0 00:19:48.858 [2024-07-15 23:45:37.822070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:48.858 [2024-07-15 23:45:37.822109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:19:48.858 [2024-07-15 23:45:37.822153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.822177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.822202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.822223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.822253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.822273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.822295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.822316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.824645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:48.858 [2024-07-15 23:45:37.824678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:19:48.858 [2024-07-15 23:45:37.824717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.824740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.824763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.824783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.824805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.824826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.824847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.824868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.827353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:48.858 [2024-07-15 23:45:37.827383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:19:48.858 [2024-07-15 23:45:37.827420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.827441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.827464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.827485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.827507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.827528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.827560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.827581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.829991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:48.858 [2024-07-15 23:45:37.830027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
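The killprocess 1502171 call traced above at common/autotest_common.sh@942-966 is the teardown of the nvmf target process: validate the pid, confirm the process still exists and is not a sudo wrapper, log the kill, then kill and reap it. A rough reconstruction from the visible xtrace only (argument handling and the sudo branch are simplified; the real helper lives in autotest_common.sh):

    # Sketch of the kill sequence seen at autotest_common.sh@942-966; not the exact helper.
    killprocess() {
        local pid=$1 process_name
        [ -n "$pid" ] || return 1                           # @942: refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 1              # @946: process must still be running
        if [ "$(uname)" = Linux ]; then                     # @947
            process_name=$(ps --no-headers -o comm= "$pid") # @948: reports reactor_1 in this run
        fi
        # @952: a process named sudo would need different handling (branch not exercised here)
        echo "killing process with pid $pid"                # @960
        kill "$pid"                                         # @961
        wait "$pid"                                         # @966: reap it so the exit status is observed
    }

In this run the target's main thread reports as reactor_1, so the plain kill/wait path is taken, after which shutdown.sh clears nvmfpid and sleeps before the next phase of the test.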
00:19:48.858 [2024-07-15 23:45:37.830064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.830086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.830109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.830129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.830151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.830171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.830193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.830213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.832414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:48.858 [2024-07-15 23:45:37.832444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:19:48.858 [2024-07-15 23:45:37.832481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.832503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.832525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.832558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.832581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.832601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.832623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.832643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.834842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:48.858 [2024-07-15 23:45:37.834852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:48.858 [2024-07-15 23:45:37.834865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.834872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.834879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.834885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.834892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.834901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.834907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.834913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.836959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:48.858 [2024-07-15 23:45:37.836971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:48.858 [2024-07-15 23:45:37.836986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.836996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.837005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.837013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.837022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.837030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:48.858 [2024-07-15 23:45:37.837039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:48.858 [2024-07-15 23:45:37.837047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.839560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:49.122 [2024-07-15 23:45:37.839590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:19:49.122 [2024-07-15 23:45:37.839626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.122 [2024-07-15 23:45:37.839649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.839671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.122 [2024-07-15 23:45:37.839691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.839713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.122 [2024-07-15 23:45:37.839733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.839755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.122 [2024-07-15 23:45:37.839775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.842172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:49.122 [2024-07-15 23:45:37.842202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:19:49.122 [2024-07-15 23:45:37.842239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.122 [2024-07-15 23:45:37.842267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.842290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.122 [2024-07-15 23:45:37.842310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.842332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.122 [2024-07-15 23:45:37.842353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.842375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:49.122 [2024-07-15 23:45:37.842396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:28101 cdw0:0 sqhd:c200 p:1 m:1 dnr:0 00:19:49.122 [2024-07-15 23:45:37.844669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:49.122 [2024-07-15 23:45:37.844699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:19:49.122 [2024-07-15 23:45:37.846866] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:19:49.122 [2024-07-15 23:45:37.846900] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.122 [2024-07-15 23:45:37.849150] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:19:49.122 [2024-07-15 23:45:37.849182] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.122 [2024-07-15 23:45:37.851701] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:19:49.122 [2024-07-15 23:45:37.851717] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.122 [2024-07-15 23:45:37.853995] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:19:49.122 [2024-07-15 23:45:37.854026] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.122 [2024-07-15 23:45:37.856258] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:19:49.122 [2024-07-15 23:45:37.856290] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.122 [2024-07-15 23:45:37.858245] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:19:49.122 [2024-07-15 23:45:37.858276] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.122 [2024-07-15 23:45:37.860309] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 00:19:49.122 [2024-07-15 23:45:37.860326] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:49.122 [2024-07-15 23:45:37.860393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 
23:45:37.860649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.122 [2024-07-15 23:45:37.860756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183500 00:19:49.122 [2024-07-15 23:45:37.860766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183500 00:19:49.123 [2024-07-15 23:45:37.860792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183500 00:19:49.123 [2024-07-15 23:45:37.860819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183500 00:19:49.123 [2024-07-15 23:45:37.860845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183500 00:19:49.123 [2024-07-15 23:45:37.860871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183500 00:19:49.123 [2024-07-15 23:45:37.860897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.860923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.860949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.860976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.860992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861126] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183f00 00:19:49.123 [2024-07-15 23:45:37.861270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.861286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183a00 00:19:49.123 [2024-07-15 23:45:37.861296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863257] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:19:49.123 [2024-07-15 23:45:37.863290] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:19:49.123 [2024-07-15 23:45:37.863433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 
23:45:37.863929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.863980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.863991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.864009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.864020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.864035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.864045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.864061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183400 00:19:49.123 [2024-07-15 23:45:37.864072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.864087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x184300 00:19:49.123 [2024-07-15 23:45:37.864097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.123 [2024-07-15 23:45:37.864113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864405] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 
nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x184300 00:19:49.124 [2024-07-15 23:45:37.864905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x184100 00:19:49.124 [2024-07-15 23:45:37.864933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183f00 00:19:49.124 [2024-07-15 23:45:37.864959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.864976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012360000 len:0x10000 key:0x184400 00:19:49.124 [2024-07-15 23:45:37.864986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.124 [2024-07-15 23:45:37.865005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012381000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123a2000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123c3000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123e4000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012405000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012426000 
len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012447000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012468000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012489000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124aa000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124cb000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124ec000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001250d000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001252e000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 23:45:37.865374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.865391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001254f000 len:0x10000 key:0x184400 00:19:49.125 [2024-07-15 
23:45:37.865402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.868499] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 00:19:49.125 [2024-07-15 23:45:37.868535] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.125 [2024-07-15 23:45:37.868577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.868599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.868655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.868680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.868713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.868735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.868768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.868796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.868829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.868850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.868884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.868905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.868938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.868960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.868992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 
p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x183c00 00:19:49.125 [2024-07-15 23:45:37.869365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.125 [2024-07-15 23:45:37.869381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x183c00 00:19:49.126 [2024-07-15 23:45:37.869392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183c00 00:19:49.126 [2024-07-15 23:45:37.869417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183c00 00:19:49.126 [2024-07-15 23:45:37.869443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183c00 00:19:49.126 [2024-07-15 23:45:37.869469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x183c00 00:19:49.126 [2024-07-15 23:45:37.869495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 
[2024-07-15 23:45:37.869512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x183c00 00:19:49.126 [2024-07-15 23:45:37.869523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183c00 00:19:49.126 [2024-07-15 23:45:37.869555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869753] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.869988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.869999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.870014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.870024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.870040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184500 00:19:49.126 [2024-07-15 23:45:37.870051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.126 [2024-07-15 23:45:37.870067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x184100 00:19:49.126 [2024-07-15 23:45:37.870077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012780000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127a1000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127c2000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000127e3000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012804000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870235] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012825000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012846000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012867000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012888000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128a9000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128ca000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000128eb000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001290c000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001292d000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34560 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001294e000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.870511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001296f000 len:0x10000 key:0x184400 00:19:49.127 [2024-07-15 23:45:37.870523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:abf83000 sqhd:52b0 p:0 m:0 dnr:0 00:19:49.127 [2024-07-15 23:45:37.890651] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller. 00:19:49.127 [2024-07-15 23:45:37.890669] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890719] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890730] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890739] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890748] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890756] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890765] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890774] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890783] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890791] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:49.127 [2024-07-15 23:45:37.890800] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
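The long run of ABORTED - SQ DELETION completions above is the in-flight bdevperf I/O being completed with an abort status while the target's submission queues are torn down during the shutdown test; the disconnected-qpair and failover-in-progress notices that follow are the host side reacting to the same teardown. A small, hypothetical helper for tallying those aborts per queue from a saved copy of this console output (the file name console.log is an assumption, not something the test suite produces):

    # Count ABORTED - SQ DELETION completions per submission queue id (qid)
    # in a saved copy of this console log.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log |
      awk -F'qid:' '{count[$2]++} END {for (q in count) printf "qid %s: %d aborted completions\n", q, count[q]}'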
00:19:49.127 [2024-07-15 23:45:37.891379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:49.127 [2024-07-15 23:45:37.891891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:49.127 task offset: 37888 on job bdev=Nvme1n1 fails
00:19:49.127
00:19:49.127 Latency(us)
00:19:49.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:49.127 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme1n1 ended in about 1.92 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme1n1 : 1.92 141.46 8.84 33.28 0.00 362382.69 5679.79 1046578.71
00:19:49.127 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme2n1 ended in about 1.92 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme2n1 : 1.92 141.36 8.83 33.26 0.00 359216.97 9112.62 1046578.71
00:19:49.127 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme3n1 ended in about 1.93 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme3n1 : 1.93 149.58 9.35 33.24 0.00 340073.41 11609.23 1038589.56
00:19:49.127 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme4n1 ended in about 1.93 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme4n1 : 1.93 150.52 9.41 33.22 0.00 335478.38 20222.54 1038589.56
00:19:49.127 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme5n1 ended in about 1.93 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme5n1 : 1.93 141.09 8.82 33.20 0.00 350656.80 28336.52 1038589.56
00:19:49.127 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme6n1 ended in about 1.93 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme6n1 : 1.93 141.01 8.81 33.18 0.00 347717.15 31956.60 1038589.56
00:19:49.127 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme7n1 ended in about 1.93 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme7n1 : 1.93 149.21 9.33 33.16 0.00 329219.10 37698.80 1030600.41
00:19:49.127 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme8n1 ended in about 1.91 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme8n1 : 1.91 149.47 9.34 33.45 0.00 323206.08 44938.97 1078535.31
00:19:49.127 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme9n1 ended in about 1.88 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme9n1 : 1.88 135.99 8.50 34.00 0.00 349036.93 42192.70 1070546.16
00:19:49.127 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:49.127 Job: Nvme10n1 ended in about 1.89 seconds with error
00:19:49.127 Verification LBA range: start 0x0 length 0x400
00:19:49.127 Nvme10n1 : 1.89 135.63 8.48 33.91 0.00 346672.96 26464.06 1062557.01
00:19:49.127 ===================================================================================================================
00:19:49.127 Total : 1435.31 89.71 333.89 0.00 344106.94 5679.79 1078535.31
00:19:49.127 [2024-07-15 23:45:37.936250] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:49.127 [2024-07-15 23:45:37.937423] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:49.127 [2024-07-15 23:45:37.937469] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:49.127 [2024-07-15 23:45:37.937487] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:19:49.128 [2024-07-15 23:45:37.937600] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:49.128 [2024-07-15 23:45:37.937626] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:49.128 [2024-07-15 23:45:37.937643] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:19:49.128 [2024-07-15 23:45:37.937761] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:49.128 [2024-07-15 23:45:37.937785] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:49.128 [2024-07-15 23:45:37.937802] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:19:49.128 [2024-07-15 23:45:37.937916] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:49.128 [2024-07-15 23:45:37.937940] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:49.128 [2024-07-15 23:45:37.937958] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:19:49.128 [2024-07-15 23:45:37.938153] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:49.128 [2024-07-15 23:45:37.938180] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:49.128 [2024-07-15
23:45:37.938210] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0 00:19:49.128 [2024-07-15 23:45:37.938312] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:49.128 [2024-07-15 23:45:37.938320] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:49.128 [2024-07-15 23:45:37.938325] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080 00:19:49.128 [2024-07-15 23:45:37.938406] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:49.128 [2024-07-15 23:45:37.938414] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:49.128 [2024-07-15 23:45:37.938419] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0 00:19:49.128 [2024-07-15 23:45:37.938490] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:49.128 [2024-07-15 23:45:37.938498] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:49.128 [2024-07-15 23:45:37.938503] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500 00:19:49.128 [2024-07-15 23:45:37.938608] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:49.128 [2024-07-15 23:45:37.938617] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:49.128 [2024-07-15 23:45:37.938621] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040 00:19:49.128 [2024-07-15 23:45:37.938704] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:19:49.128 [2024-07-15 23:45:37.938715] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:19:49.128 [2024-07-15 23:45:37.938720] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1502456 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:19:49.386 23:45:38 
nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:49.386 rmmod nvme_rdma 00:19:49.386 rmmod nvme_fabrics 00:19:49.386 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 1502456 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:49.386 00:19:49.386 real 0m5.168s 00:19:49.386 user 0m17.760s 00:19:49.386 sys 0m1.075s 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:49.386 23:45:38 nvmf_rdma.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:49.387 ************************************ 00:19:49.387 END TEST nvmf_shutdown_tc3 00:19:49.387 ************************************ 00:19:49.387 23:45:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1136 -- # return 0 00:19:49.387 23:45:38 nvmf_rdma.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:19:49.387 00:19:49.387 real 0m23.034s 00:19:49.387 user 1m10.453s 00:19:49.387 sys 0m7.339s 00:19:49.387 23:45:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:49.387 23:45:38 nvmf_rdma.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:49.387 ************************************ 00:19:49.387 END TEST nvmf_shutdown 00:19:49.387 ************************************ 00:19:49.645 23:45:38 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:19:49.645 23:45:38 nvmf_rdma -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:49.645 23:45:38 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.645 23:45:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:49.645 23:45:38 nvmf_rdma -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:49.645 23:45:38 nvmf_rdma -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:49.645 23:45:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:49.645 23:45:38 nvmf_rdma -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:49.645 23:45:38 nvmf_rdma -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:49.645 23:45:38 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:19:49.645 23:45:38 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:49.645 23:45:38 
nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:49.645 ************************************ 00:19:49.645 START TEST nvmf_multicontroller 00:19:49.645 ************************************ 00:19:49.645 23:45:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:49.645 * Looking for test storage... 00:19:49.645 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:49.645 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.645 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:49.645 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 
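The common.sh trace above derives the host identity from nvme-cli: nvme gen-hostnqn returns an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and NVME_HOSTID is simply that UUID suffix. A minimal sketch of the same derivation (assumes nvme-cli is installed; the parameter expansion below is an illustration, not the verbatim common.sh code):

    # Generate a host NQN and extract the UUID part for use as the host ID,
    # mirroring the NVME_HOSTNQN / NVME_HOSTID values seen in the trace above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"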
00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:19:49.646 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:19:49.646 00:19:49.646 real 0m0.123s 00:19:49.646 user 0m0.067s 00:19:49.646 sys 0m0.065s 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:49.646 23:45:38 nvmf_rdma.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:49.646 ************************************ 00:19:49.646 END TEST nvmf_multicontroller 00:19:49.646 ************************************ 00:19:49.904 23:45:38 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:19:49.904 23:45:38 nvmf_rdma -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:49.904 23:45:38 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:19:49.904 23:45:38 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:49.904 23:45:38 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:49.904 ************************************ 00:19:49.904 START TEST nvmf_aer 00:19:49.904 ************************************ 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:49.904 * Looking for test storage... 
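The nvmf_multicontroller run traced above ends almost immediately: host/multicontroller.sh@18-20 compare the transport against rdma, print the skip message, and exit 0, so the whole test is a no-op on this rig. The guard amounts to something like the following sketch (paraphrased, not the verbatim script; the TEST_TRANSPORT variable name is an assumption):

    # Skip the multicontroller test on RDMA, as the trace above shows.
    if [ "$TEST_TRANSPORT" = rdma ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi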
00:19:49.904 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:49.904 23:45:38 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:55.168 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.168 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.168 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:55.169 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:55.169 23:45:43 
nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:55.169 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:55.169 Found net devices under 0000:da:00.0: mlx_0_0 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:55.169 Found net devices under 0000:da:00.1: mlx_0_1 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:55.169 
23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:55.169 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:55.170 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.170 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:19:55.170 altname enp218s0f0np0 00:19:55.170 altname ens818f0np0 00:19:55.170 inet 192.168.100.8/24 scope global mlx_0_0 00:19:55.170 valid_lft forever preferred_lft forever 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 
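The get_ip_address helper traced above is a small pipeline over ip -o -4 addr show; a minimal sketch of that step, reusing the interface name and address from this run:

# sketch of the get_ip_address step traced above (interface and address from this run)
interface=mlx_0_0
ip=$(ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1)
echo "$ip"    # prints 192.168.100.8 on this host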
00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:55.170 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:55.170 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:19:55.170 altname enp218s0f1np1 00:19:55.170 altname ens818f1np1 00:19:55.170 inet 192.168.100.9/24 scope global mlx_0_1 00:19:55.170 valid_lft forever preferred_lft forever 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr 
show mlx_0_0 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:55.170 192.168.100.9' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:55.170 192.168.100.9' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:55.170 192.168.100.9' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1506305 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1506305 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@823 -- # '[' -z 1506305 ']' 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@828 -- # local max_retries=100 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
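The two interface addresses gathered above end up in RDMA_IP_LIST, which is then split with head and tail; a minimal sketch of that split, with the values from this run:

# sketch of the RDMA_IP_LIST split traced above (addresses from this run)
RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9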
00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@832 -- # xtrace_disable 00:19:55.170 23:45:43 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:55.170 [2024-07-15 23:45:43.998056] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:19:55.170 [2024-07-15 23:45:43.998104] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.170 [2024-07-15 23:45:44.053866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.170 [2024-07-15 23:45:44.126420] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.170 [2024-07-15 23:45:44.126459] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.170 [2024-07-15 23:45:44.126465] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.170 [2024-07-15 23:45:44.126470] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.170 [2024-07-15 23:45:44.126475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.170 [2024-07-15 23:45:44.126600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.170 [2024-07-15 23:45:44.126662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.170 [2024-07-15 23:45:44.126633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.170 [2024-07-15 23:45:44.126661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@856 -- # return 0 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.104 [2024-07-15 23:45:44.858863] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x163ecc0/0x16431b0) succeed. 00:19:56.104 [2024-07-15 23:45:44.867946] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1640300/0x1684840) succeed. 
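The rpc_cmd calls traced here and in the lines that follow perform the whole aer target bring-up. A sketch of the equivalent direct scripts/rpc.py invocations, reusing the exact arguments from this log (treating rpc_cmd as a thin wrapper over rpc.py talking to the default /var/tmp/spdk.sock is an assumption about the test framework):

# sketch only: same arguments as the rpc_cmd trace; assumes the default RPC socket /var/tmp/spdk.sock
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_get_subsystems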
00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.104 23:45:44 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.104 Malloc0 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.104 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.105 [2024-07-15 23:45:45.032513] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.105 [ 00:19:56.105 { 00:19:56.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:56.105 "subtype": "Discovery", 00:19:56.105 "listen_addresses": [], 00:19:56.105 "allow_any_host": true, 00:19:56.105 "hosts": [] 00:19:56.105 }, 00:19:56.105 { 00:19:56.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.105 "subtype": "NVMe", 00:19:56.105 "listen_addresses": [ 00:19:56.105 { 00:19:56.105 "trtype": "RDMA", 00:19:56.105 "adrfam": "IPv4", 00:19:56.105 "traddr": "192.168.100.8", 00:19:56.105 "trsvcid": "4420" 00:19:56.105 } 00:19:56.105 ], 00:19:56.105 "allow_any_host": true, 00:19:56.105 "hosts": [], 00:19:56.105 "serial_number": "SPDK00000000000001", 00:19:56.105 "model_number": "SPDK bdev Controller", 00:19:56.105 "max_namespaces": 2, 00:19:56.105 "min_cntlid": 1, 00:19:56.105 "max_cntlid": 65519, 00:19:56.105 "namespaces": [ 00:19:56.105 { 00:19:56.105 "nsid": 1, 00:19:56.105 "bdev_name": "Malloc0", 00:19:56.105 "name": "Malloc0", 00:19:56.105 "nguid": "1F7EFE2341484A24908E01E4A4B2C76F", 00:19:56.105 "uuid": "1f7efe23-4148-4a24-908e-01e4a4b2c76f" 00:19:56.105 } 00:19:56.105 ] 00:19:56.105 } 00:19:56.105 ] 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@24 -- # 
rm -f /tmp/aer_touch_file 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@33 -- # aerpid=1506426 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1259 -- # local i=0 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 0 -lt 200 ']' 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # i=1 00:19:56.105 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1261 -- # '[' 1 -lt 200 ']' 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1262 -- # i=2 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1263 -- # sleep 0.1 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1260 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1270 -- # return 0 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.363 Malloc1 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.363 [ 00:19:56.363 { 00:19:56.363 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:56.363 "subtype": "Discovery", 00:19:56.363 "listen_addresses": [], 00:19:56.363 "allow_any_host": true, 00:19:56.363 "hosts": [] 00:19:56.363 }, 00:19:56.363 { 00:19:56.363 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.363 "subtype": "NVMe", 00:19:56.363 "listen_addresses": [ 00:19:56.363 { 00:19:56.363 "trtype": "RDMA", 00:19:56.363 "adrfam": "IPv4", 00:19:56.363 "traddr": "192.168.100.8", 00:19:56.363 "trsvcid": "4420" 00:19:56.363 } 00:19:56.363 ], 00:19:56.363 "allow_any_host": true, 00:19:56.363 "hosts": [], 00:19:56.363 "serial_number": "SPDK00000000000001", 00:19:56.363 "model_number": "SPDK bdev Controller", 00:19:56.363 "max_namespaces": 2, 00:19:56.363 "min_cntlid": 1, 00:19:56.363 "max_cntlid": 65519, 00:19:56.363 
"namespaces": [ 00:19:56.363 { 00:19:56.363 "nsid": 1, 00:19:56.363 "bdev_name": "Malloc0", 00:19:56.363 "name": "Malloc0", 00:19:56.363 "nguid": "1F7EFE2341484A24908E01E4A4B2C76F", 00:19:56.363 "uuid": "1f7efe23-4148-4a24-908e-01e4a4b2c76f" 00:19:56.363 }, 00:19:56.363 { 00:19:56.363 "nsid": 2, 00:19:56.363 "bdev_name": "Malloc1", 00:19:56.363 "name": "Malloc1", 00:19:56.363 "nguid": "39B4A030A90C4AFAA5FCF831CF48D914", 00:19:56.363 "uuid": "39b4a030-a90c-4afa-a5fc-f831cf48d914" 00:19:56.363 } 00:19:56.363 ] 00:19:56.363 } 00:19:56.363 ] 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.363 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@43 -- # wait 1506426 00:19:56.622 Asynchronous Event Request test 00:19:56.622 Attaching to 192.168.100.8 00:19:56.622 Attached to 192.168.100.8 00:19:56.622 Registering asynchronous event callbacks... 00:19:56.622 Starting namespace attribute notice tests for all controllers... 00:19:56.622 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:56.622 aer_cb - Changed Namespace 00:19:56.622 Cleaning up... 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@553 -- # xtrace_disable 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:56.622 rmmod nvme_rdma 00:19:56.622 rmmod nvme_fabrics 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1506305 ']' 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1506305 00:19:56.622 23:45:45 nvmf_rdma.nvmf_aer -- 
common/autotest_common.sh@942 -- # '[' -z 1506305 ']' 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@946 -- # kill -0 1506305 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@947 -- # uname 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1506305 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1506305' 00:19:56.623 killing process with pid 1506305 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@961 -- # kill 1506305 00:19:56.623 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@966 -- # wait 1506305 00:19:56.882 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:56.882 23:45:45 nvmf_rdma.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:56.882 00:19:56.882 real 0m7.131s 00:19:56.882 user 0m8.068s 00:19:56.882 sys 0m4.327s 00:19:56.882 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@1118 -- # xtrace_disable 00:19:56.882 23:45:45 nvmf_rdma.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:56.882 ************************************ 00:19:56.882 END TEST nvmf_aer 00:19:56.882 ************************************ 00:19:56.882 23:45:45 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:19:56.882 23:45:45 nvmf_rdma -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:19:56.882 23:45:45 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:19:56.882 23:45:45 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:19:56.882 23:45:45 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:19:56.882 ************************************ 00:19:56.882 START TEST nvmf_async_init 00:19:56.882 ************************************ 00:19:56.882 23:45:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:19:57.140 * Looking for test storage... 
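One note on the aer run that just ended before nvmf_async_init proceeds: the waitforfile helper that gated the aer binary amounts to a bounded polling loop. A minimal sketch reconstructed from the '[' '!' -e /tmp/aer_touch_file ']' / sleep 0.1 trace above (the 200-iteration cap and the final existence check are taken from that trace; the exact helper body is a reconstruction, not the framework source):

# reconstruction of the waitforfile polling seen in the aer trace above
waitforfile() {
    local i=0
    while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$1" ]    # assumed failure status if the file never appears; this run returned 0
}
waitforfile /tmp/aer_touch_file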
00:19:57.140 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:57.140 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@20 -- # nguid=be46c72e96254a48bf4845c82171c242 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:57.141 
23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:57.141 23:45:45 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.408 23:45:51 
nvmf_rdma.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:02.408 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:02.408 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:02.408 Found net devices under 0000:da:00.0: mlx_0_0 
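The 'Found net devices under ...' messages above come from a glob over sysfs; a minimal sketch of that lookup, using one PCI address from this run:

# sketch of the sysfs netdev lookup behind the 'Found net devices under ...' messages
pci=0000:da:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep mlx_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"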
00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:02.408 Found net devices under 0000:da:00.1: mlx_0_1 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:02.408 
23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:02.408 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:02.409 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:02.409 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:02.409 altname enp218s0f0np0 00:20:02.409 altname ens818f0np0 00:20:02.409 inet 192.168.100.8/24 scope global mlx_0_0 00:20:02.409 valid_lft forever preferred_lft forever 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:02.409 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:02.409 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:02.409 altname enp218s0f1np1 00:20:02.409 altname ens818f1np1 00:20:02.409 inet 192.168.100.9/24 scope global mlx_0_1 00:20:02.409 valid_lft forever preferred_lft forever 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:02.409 
23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:02.409 192.168.100.9' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:02.409 192.168.100.9' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 
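Earlier in this async_init setup (the async_init.sh@20 trace) the namespace NGUID was produced by stripping the dashes from a freshly generated UUID; a minimal sketch with the value from this run, which reappears below as the nguid/uuid of the attached namespace:

# sketch of the NGUID derivation from the async_init.sh@20 trace (value from this run)
nguid=$(uuidgen | tr -d -)    # be46c72e96254a48bf4845c82171c242 in this log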
00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:02.409 192.168.100.9' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1509618 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1509618 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@823 -- # '[' -z 1509618 ']' 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:02.409 23:45:51 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:02.667 [2024-07-15 23:45:51.418959] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:20:02.667 [2024-07-15 23:45:51.419006] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.667 [2024-07-15 23:45:51.474188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.667 [2024-07-15 23:45:51.552716] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.667 [2024-07-15 23:45:51.552751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
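The nvmfappstart step traced above launches nvmf_tgt with core mask 0x1 and waits for its RPC socket; a minimal sketch of that launch, using the command line from this log (backgrounding with & and capturing $! is an assumption about how the helper records the pid; the log only shows the resulting nvmfpid of 1509618):

# sketch of the nvmfappstart step traced above (command line from this run)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # test-framework helper: waits until /var/tmp/spdk.sock accepts RPCs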
00:20:02.667 [2024-07-15 23:45:51.552758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.667 [2024-07-15 23:45:51.552763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.667 [2024-07-15 23:45:51.552769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.667 [2024-07-15 23:45:51.552786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.234 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:03.234 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@856 -- # return 0 00:20:03.234 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.234 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.234 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.493 [2024-07-15 23:45:52.270954] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17e2910/0x17e6e00) succeed. 00:20:03.493 [2024-07-15 23:45:52.279852] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17e3e10/0x1828490) succeed. 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.493 null0 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.493 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g be46c72e96254a48bf4845c82171c242 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.494 [2024-07-15 23:45:52.355169] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.494 nvme0n1 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.494 [ 00:20:03.494 { 00:20:03.494 "name": "nvme0n1", 00:20:03.494 "aliases": [ 00:20:03.494 "be46c72e-9625-4a48-bf48-45c82171c242" 00:20:03.494 ], 00:20:03.494 "product_name": "NVMe disk", 00:20:03.494 "block_size": 512, 00:20:03.494 "num_blocks": 2097152, 00:20:03.494 "uuid": "be46c72e-9625-4a48-bf48-45c82171c242", 00:20:03.494 "assigned_rate_limits": { 00:20:03.494 "rw_ios_per_sec": 0, 00:20:03.494 "rw_mbytes_per_sec": 0, 00:20:03.494 "r_mbytes_per_sec": 0, 00:20:03.494 "w_mbytes_per_sec": 0 00:20:03.494 }, 00:20:03.494 "claimed": false, 00:20:03.494 "zoned": false, 00:20:03.494 "supported_io_types": { 00:20:03.494 "read": true, 00:20:03.494 "write": true, 00:20:03.494 "unmap": false, 00:20:03.494 "flush": true, 00:20:03.494 "reset": true, 00:20:03.494 "nvme_admin": true, 00:20:03.494 "nvme_io": true, 00:20:03.494 "nvme_io_md": false, 00:20:03.494 "write_zeroes": true, 00:20:03.494 "zcopy": false, 00:20:03.494 "get_zone_info": false, 00:20:03.494 "zone_management": false, 00:20:03.494 "zone_append": false, 00:20:03.494 "compare": true, 00:20:03.494 "compare_and_write": true, 00:20:03.494 "abort": true, 00:20:03.494 "seek_hole": false, 00:20:03.494 "seek_data": false, 00:20:03.494 "copy": true, 00:20:03.494 "nvme_iov_md": false 00:20:03.494 }, 00:20:03.494 "memory_domains": [ 00:20:03.494 { 00:20:03.494 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:03.494 "dma_device_type": 0 00:20:03.494 } 00:20:03.494 ], 00:20:03.494 "driver_specific": { 00:20:03.494 "nvme": [ 00:20:03.494 { 00:20:03.494 "trid": { 00:20:03.494 "trtype": "RDMA", 00:20:03.494 "adrfam": "IPv4", 00:20:03.494 "traddr": "192.168.100.8", 00:20:03.494 "trsvcid": "4420", 00:20:03.494 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:03.494 }, 00:20:03.494 "ctrlr_data": { 00:20:03.494 "cntlid": 1, 00:20:03.494 "vendor_id": "0x8086", 00:20:03.494 "model_number": "SPDK bdev Controller", 00:20:03.494 "serial_number": "00000000000000000000", 00:20:03.494 "firmware_revision": "24.09", 00:20:03.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:03.494 "oacs": { 00:20:03.494 "security": 0, 
00:20:03.494 "format": 0, 00:20:03.494 "firmware": 0, 00:20:03.494 "ns_manage": 0 00:20:03.494 }, 00:20:03.494 "multi_ctrlr": true, 00:20:03.494 "ana_reporting": false 00:20:03.494 }, 00:20:03.494 "vs": { 00:20:03.494 "nvme_version": "1.3" 00:20:03.494 }, 00:20:03.494 "ns_data": { 00:20:03.494 "id": 1, 00:20:03.494 "can_share": true 00:20:03.494 } 00:20:03.494 } 00:20:03.494 ], 00:20:03.494 "mp_policy": "active_passive" 00:20:03.494 } 00:20:03.494 } 00:20:03.494 ] 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.494 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.494 [2024-07-15 23:45:52.460927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:03.752 [2024-07-15 23:45:52.478858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:03.752 [2024-07-15 23:45:52.499971] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:03.752 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.752 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 [ 00:20:03.753 { 00:20:03.753 "name": "nvme0n1", 00:20:03.753 "aliases": [ 00:20:03.753 "be46c72e-9625-4a48-bf48-45c82171c242" 00:20:03.753 ], 00:20:03.753 "product_name": "NVMe disk", 00:20:03.753 "block_size": 512, 00:20:03.753 "num_blocks": 2097152, 00:20:03.753 "uuid": "be46c72e-9625-4a48-bf48-45c82171c242", 00:20:03.753 "assigned_rate_limits": { 00:20:03.753 "rw_ios_per_sec": 0, 00:20:03.753 "rw_mbytes_per_sec": 0, 00:20:03.753 "r_mbytes_per_sec": 0, 00:20:03.753 "w_mbytes_per_sec": 0 00:20:03.753 }, 00:20:03.753 "claimed": false, 00:20:03.753 "zoned": false, 00:20:03.753 "supported_io_types": { 00:20:03.753 "read": true, 00:20:03.753 "write": true, 00:20:03.753 "unmap": false, 00:20:03.753 "flush": true, 00:20:03.753 "reset": true, 00:20:03.753 "nvme_admin": true, 00:20:03.753 "nvme_io": true, 00:20:03.753 "nvme_io_md": false, 00:20:03.753 "write_zeroes": true, 00:20:03.753 "zcopy": false, 00:20:03.753 "get_zone_info": false, 00:20:03.753 "zone_management": false, 00:20:03.753 "zone_append": false, 00:20:03.753 "compare": true, 00:20:03.753 "compare_and_write": true, 00:20:03.753 "abort": true, 00:20:03.753 "seek_hole": false, 00:20:03.753 "seek_data": false, 00:20:03.753 "copy": true, 00:20:03.753 "nvme_iov_md": false 00:20:03.753 }, 00:20:03.753 "memory_domains": [ 00:20:03.753 { 00:20:03.753 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:03.753 "dma_device_type": 0 00:20:03.753 } 00:20:03.753 ], 00:20:03.753 "driver_specific": { 00:20:03.753 "nvme": [ 00:20:03.753 { 00:20:03.753 "trid": { 00:20:03.753 "trtype": "RDMA", 00:20:03.753 "adrfam": "IPv4", 00:20:03.753 "traddr": "192.168.100.8", 00:20:03.753 "trsvcid": "4420", 00:20:03.753 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:03.753 }, 00:20:03.753 "ctrlr_data": { 00:20:03.753 "cntlid": 2, 00:20:03.753 "vendor_id": 
"0x8086", 00:20:03.753 "model_number": "SPDK bdev Controller", 00:20:03.753 "serial_number": "00000000000000000000", 00:20:03.753 "firmware_revision": "24.09", 00:20:03.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:03.753 "oacs": { 00:20:03.753 "security": 0, 00:20:03.753 "format": 0, 00:20:03.753 "firmware": 0, 00:20:03.753 "ns_manage": 0 00:20:03.753 }, 00:20:03.753 "multi_ctrlr": true, 00:20:03.753 "ana_reporting": false 00:20:03.753 }, 00:20:03.753 "vs": { 00:20:03.753 "nvme_version": "1.3" 00:20:03.753 }, 00:20:03.753 "ns_data": { 00:20:03.753 "id": 1, 00:20:03.753 "can_share": true 00:20:03.753 } 00:20:03.753 } 00:20:03.753 ], 00:20:03.753 "mp_policy": "active_passive" 00:20:03.753 } 00:20:03.753 } 00:20:03.753 ] 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.KrByZJWf1d 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.KrByZJWf1d 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 [2024-07-15 23:45:52.563468] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KrByZJWf1d 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KrByZJWf1d 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 [2024-07-15 23:45:52.579508] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:03.753 nvme0n1 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.753 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.753 [ 00:20:03.753 { 00:20:03.753 "name": "nvme0n1", 00:20:03.753 "aliases": [ 00:20:03.753 "be46c72e-9625-4a48-bf48-45c82171c242" 00:20:03.753 ], 00:20:03.753 "product_name": "NVMe disk", 00:20:03.753 "block_size": 512, 00:20:03.753 "num_blocks": 2097152, 00:20:03.753 "uuid": "be46c72e-9625-4a48-bf48-45c82171c242", 00:20:03.753 "assigned_rate_limits": { 00:20:03.753 "rw_ios_per_sec": 0, 00:20:03.753 "rw_mbytes_per_sec": 0, 00:20:03.753 "r_mbytes_per_sec": 0, 00:20:03.753 "w_mbytes_per_sec": 0 00:20:03.753 }, 00:20:03.753 "claimed": false, 00:20:03.753 "zoned": false, 00:20:03.753 "supported_io_types": { 00:20:03.753 "read": true, 00:20:03.753 "write": true, 00:20:03.753 "unmap": false, 00:20:03.753 "flush": true, 00:20:03.753 "reset": true, 00:20:03.753 "nvme_admin": true, 00:20:03.753 "nvme_io": true, 00:20:03.753 "nvme_io_md": false, 00:20:03.753 "write_zeroes": true, 00:20:03.753 "zcopy": false, 00:20:03.753 "get_zone_info": false, 00:20:03.753 "zone_management": false, 00:20:03.753 "zone_append": false, 00:20:03.753 "compare": true, 00:20:03.753 "compare_and_write": true, 00:20:03.753 "abort": true, 00:20:03.753 "seek_hole": false, 00:20:03.753 "seek_data": false, 00:20:03.753 "copy": true, 00:20:03.753 "nvme_iov_md": false 00:20:03.753 }, 00:20:03.753 "memory_domains": [ 00:20:03.753 { 00:20:03.753 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:20:03.753 "dma_device_type": 0 00:20:03.753 } 00:20:03.753 ], 00:20:03.753 "driver_specific": { 00:20:03.753 "nvme": [ 00:20:03.753 { 00:20:03.753 "trid": { 00:20:03.753 "trtype": "RDMA", 00:20:03.753 "adrfam": "IPv4", 00:20:03.753 "traddr": "192.168.100.8", 00:20:03.753 "trsvcid": "4421", 00:20:03.753 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:03.753 }, 00:20:03.753 "ctrlr_data": { 00:20:03.753 "cntlid": 3, 00:20:03.753 "vendor_id": "0x8086", 00:20:03.753 "model_number": "SPDK bdev Controller", 00:20:03.753 "serial_number": "00000000000000000000", 00:20:03.753 "firmware_revision": "24.09", 00:20:03.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:03.754 "oacs": { 00:20:03.754 "security": 0, 00:20:03.754 "format": 0, 00:20:03.754 "firmware": 0, 00:20:03.754 "ns_manage": 0 00:20:03.754 }, 00:20:03.754 "multi_ctrlr": true, 00:20:03.754 "ana_reporting": false 00:20:03.754 }, 00:20:03.754 "vs": { 00:20:03.754 "nvme_version": "1.3" 00:20:03.754 }, 00:20:03.754 "ns_data": { 00:20:03.754 "id": 1, 00:20:03.754 "can_share": true 00:20:03.754 } 00:20:03.754 } 00:20:03.754 ], 00:20:03.754 "mp_policy": "active_passive" 00:20:03.754 } 00:20:03.754 } 00:20:03.754 ] 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.KrByZJWf1d 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:03.754 rmmod nvme_rdma 00:20:03.754 rmmod nvme_fabrics 00:20:03.754 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1509618 ']' 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1509618 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@942 -- # '[' -z 1509618 ']' 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@946 -- # kill -0 1509618 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@947 -- # uname 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1509618 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1509618' 00:20:04.011 killing process with pid 1509618 00:20:04.011 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@961 -- # kill 1509618 00:20:04.012 23:45:52 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@966 -- # wait 1509618 00:20:04.271 23:45:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:04.271 23:45:53 nvmf_rdma.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:04.271 00:20:04.271 real 0m7.140s 00:20:04.271 user 0m3.361s 00:20:04.271 sys 0m4.338s 00:20:04.271 23:45:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:04.271 23:45:53 nvmf_rdma.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:04.271 ************************************ 00:20:04.271 END TEST nvmf_async_init 00:20:04.271 ************************************ 00:20:04.271 23:45:53 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:20:04.271 23:45:53 nvmf_rdma -- nvmf/nvmf.sh@94 
-- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:20:04.271 23:45:53 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:04.271 23:45:53 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:04.271 23:45:53 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:04.271 ************************************ 00:20:04.271 START TEST dma 00:20:04.271 ************************************ 00:20:04.271 23:45:53 nvmf_rdma.dma -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:20:04.271 * Looking for test storage... 00:20:04.271 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:04.271 23:45:53 nvmf_rdma.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@7 -- # uname -s 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:04.271 23:45:53 nvmf_rdma.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.271 23:45:53 nvmf_rdma.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.271 23:45:53 nvmf_rdma.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.271 23:45:53 nvmf_rdma.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.271 23:45:53 nvmf_rdma.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.271 23:45:53 nvmf_rdma.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.271 23:45:53 nvmf_rdma.dma -- paths/export.sh@5 -- # export PATH 00:20:04.271 23:45:53 nvmf_rdma.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@47 -- # : 0 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:04.271 23:45:53 nvmf_rdma.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:20:04.271 23:45:53 nvmf_rdma.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:20:04.271 23:45:53 nvmf_rdma.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:20:04.271 23:45:53 nvmf_rdma.dma -- host/dma.sh@18 -- # subsystem=0 00:20:04.271 23:45:53 nvmf_rdma.dma -- host/dma.sh@93 -- # nvmftestinit 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.271 23:45:53 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:20:04.271 23:45:53 nvmf_rdma.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:04.271 23:45:53 nvmf_rdma.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.271 23:45:53 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@295 -- # net_devs=() 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@296 -- # e810=() 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@296 -- # local -ga e810 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@297 -- # x722=() 00:20:09.531 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@297 -- # local -ga x722 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@298 -- # mlx=() 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 
0000:da:00.0 (0x15b3 - 0x1015)' 00:20:09.532 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:09.532 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:09.532 Found net devices under 0000:da:00.0: mlx_0_0 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:09.532 Found net devices under 0000:da:00.1: mlx_0_1 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:20:09.532 23:45:58 nvmf_rdma.dma -- 
nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@58 -- # uname 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:09.532 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:09.532 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:09.532 altname enp218s0f0np0 00:20:09.532 altname ens818f0np0 00:20:09.532 inet 
192.168.100.8/24 scope global mlx_0_0 00:20:09.532 valid_lft forever preferred_lft forever 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:09.532 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:09.791 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:09.791 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:09.791 altname enp218s0f1np1 00:20:09.791 altname ens818f1np1 00:20:09.791 inet 192.168.100.9/24 scope global mlx_0_1 00:20:09.791 valid_lft forever preferred_lft forever 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@422 -- # return 0 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@105 -- # continue 2 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # 
ip -o -4 addr show mlx_0_0 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:09.791 192.168.100.9' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:09.791 192.168.100.9' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@457 -- # head -n 1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:09.791 192.168.100.9' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@458 -- # head -n 1 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@458 -- # tail -n +2 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:09.791 23:45:58 nvmf_rdma.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.791 23:45:58 nvmf_rdma.dma -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:09.791 23:45:58 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@481 -- # nvmfpid=1512925 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:09.791 23:45:58 nvmf_rdma.dma -- nvmf/common.sh@482 -- # waitforlisten 1512925 00:20:09.791 23:45:58 nvmf_rdma.dma -- common/autotest_common.sh@823 -- # '[' -z 1512925 ']' 00:20:09.791 23:45:58 nvmf_rdma.dma -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.791 23:45:58 nvmf_rdma.dma -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:09.791 23:45:58 nvmf_rdma.dma -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.791 23:45:58 nvmf_rdma.dma -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:09.791 23:45:58 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:09.791 [2024-07-15 23:45:58.664221] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:20:09.791 [2024-07-15 23:45:58.664265] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.791 [2024-07-15 23:45:58.719064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:10.049 [2024-07-15 23:45:58.791144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.049 [2024-07-15 23:45:58.791181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.049 [2024-07-15 23:45:58.791188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.049 [2024-07-15 23:45:58.791193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.049 [2024-07-15 23:45:58.791198] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.049 [2024-07-15 23:45:58.791263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.049 [2024-07-15 23:45:58.791266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:10.626 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:10.626 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@856 -- # return 0 00:20:10.626 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.626 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:10.626 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:10.626 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.626 23:45:59 nvmf_rdma.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:10.626 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:10.626 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:10.626 [2024-07-15 23:45:59.522418] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x10bc3c0/0x10c08b0) succeed. 00:20:10.626 [2024-07-15 23:45:59.531222] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x10bd870/0x1101f40) succeed. 
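For reference while following this trace: the target-side setup that the dma testcase performs over /var/tmp/spdk.sock (the RDMA transport just created above; the Malloc0 bdev, subsystem, namespace and listener traced below) corresponds to a plain scripts/rpc.py sequence. The sketch below is a simplified manual equivalent, not the test harness itself; the rpc.py path is assumed to sit under the same workspace root shown elsewhere in this log, and the NQN, address and port are the ones this run uses.

# Hedged sketch: manual equivalent of the rpc_cmd calls driven by host/dma.sh
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed location under the workspace root
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024
$rpc bdev_malloc_create 256 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420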
00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:10.931 23:45:59 nvmf_rdma.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:10.931 Malloc0 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:10.931 23:45:59 nvmf_rdma.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:10.931 23:45:59 nvmf_rdma.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:10.931 23:45:59 nvmf_rdma.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:10.931 [2024-07-15 23:45:59.678232] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:10.931 23:45:59 nvmf_rdma.dma -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:10.931 23:45:59 nvmf_rdma.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:20:10.931 23:45:59 nvmf_rdma.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:20:10.931 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@532 -- # config=() 00:20:10.931 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@532 -- # local subsystem config 00:20:10.931 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:10.931 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:10.931 { 00:20:10.931 "params": { 00:20:10.931 "name": "Nvme$subsystem", 00:20:10.931 "trtype": "$TEST_TRANSPORT", 00:20:10.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.931 "adrfam": "ipv4", 00:20:10.931 "trsvcid": "$NVMF_PORT", 00:20:10.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.931 "hdgst": ${hdgst:-false}, 00:20:10.931 "ddgst": ${ddgst:-false} 00:20:10.931 }, 00:20:10.931 "method": "bdev_nvme_attach_controller" 00:20:10.931 } 00:20:10.931 EOF 00:20:10.931 )") 00:20:10.931 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@554 -- # cat 00:20:10.931 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@556 -- # jq . 
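The gen_nvmf_target_json helper traced here assembles the bdev_nvme_attach_controller configuration that test_dma consumes through --json /dev/fd/62; the finished JSON is printed just below. As a rough, self-contained illustration of the same pattern (a simplified sketch, not the actual helper, which loops over subsystem indices and substitutes variables such as $NVMF_FIRST_TARGET_IP before piping the result through jq), a script can emit one such entry and feed it to the test binary via process substitution:

# Simplified sketch of the --json config pattern; names and layout follow this log.
emit_attach_config() {
  cat <<'EOF' | jq .
{
  "params": {
    "name": "Nvme0",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
# Process substitution stands in for the /dev/fd/62 redirection used by the harness above.
test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json <(emit_attach_config) -b Nvme0n1 -f -x translate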
00:20:10.931 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@557 -- # IFS=, 00:20:10.931 23:45:59 nvmf_rdma.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:10.931 "params": { 00:20:10.931 "name": "Nvme0", 00:20:10.931 "trtype": "rdma", 00:20:10.931 "traddr": "192.168.100.8", 00:20:10.931 "adrfam": "ipv4", 00:20:10.931 "trsvcid": "4420", 00:20:10.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:10.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:10.931 "hdgst": false, 00:20:10.931 "ddgst": false 00:20:10.931 }, 00:20:10.931 "method": "bdev_nvme_attach_controller" 00:20:10.931 }' 00:20:10.931 [2024-07-15 23:45:59.721935] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:20:10.931 [2024-07-15 23:45:59.721982] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513172 ] 00:20:10.931 [2024-07-15 23:45:59.771160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:10.931 [2024-07-15 23:45:59.844490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.931 [2024-07-15 23:45:59.844492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.497 bdev Nvme0n1 reports 1 memory domains 00:20:17.497 bdev Nvme0n1 supports RDMA memory domain 00:20:17.497 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:17.497 ========================================================================== 00:20:17.497 Latency [us] 00:20:17.497 IOPS MiB/s Average min max 00:20:17.497 Core 2: 21295.73 83.19 750.44 256.09 8642.76 00:20:17.497 Core 3: 21511.85 84.03 742.93 254.03 8739.12 00:20:17.497 ========================================================================== 00:20:17.497 Total : 42807.59 167.22 746.67 254.03 8739.12 00:20:17.497 00:20:17.497 Total operations: 214117, translate 214117 pull_push 0 memzero 0 00:20:17.498 23:46:05 nvmf_rdma.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:20:17.498 23:46:05 nvmf_rdma.dma -- host/dma.sh@107 -- # gen_malloc_json 00:20:17.498 23:46:05 nvmf_rdma.dma -- host/dma.sh@21 -- # jq . 00:20:17.498 [2024-07-15 23:46:05.271848] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:20:17.498 [2024-07-15 23:46:05.271900] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514097 ] 00:20:17.498 [2024-07-15 23:46:05.320904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:17.498 [2024-07-15 23:46:05.391273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:17.498 [2024-07-15 23:46:05.391276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.765 bdev Malloc0 reports 2 memory domains 00:20:22.765 bdev Malloc0 doesn't support RDMA memory domain 00:20:22.765 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:22.765 ========================================================================== 00:20:22.765 Latency [us] 00:20:22.765 IOPS MiB/s Average min max 00:20:22.765 Core 2: 14409.96 56.29 1109.61 461.32 3388.36 00:20:22.765 Core 3: 14374.57 56.15 1112.31 428.36 4201.23 00:20:22.765 ========================================================================== 00:20:22.765 Total : 28784.53 112.44 1110.96 428.36 4201.23 00:20:22.765 00:20:22.765 Total operations: 143971, translate 0 pull_push 575884 memzero 0 00:20:22.765 23:46:10 nvmf_rdma.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:20:22.765 23:46:10 nvmf_rdma.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:20:22.765 23:46:10 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:20:22.765 23:46:10 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:20:22.765 Ignoring -M option 00:20:22.765 [2024-07-15 23:46:10.740062] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:20:22.765 [2024-07-15 23:46:10.740110] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514966 ] 00:20:22.765 [2024-07-15 23:46:10.790528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:22.765 [2024-07-15 23:46:10.864236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.765 [2024-07-15 23:46:10.864239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.024 bdev 2b96dc25-1e72-459f-9869-f1f401028723 reports 1 memory domains 00:20:28.024 bdev 2b96dc25-1e72-459f-9869-f1f401028723 supports RDMA memory domain 00:20:28.024 Initialization complete, running randread IO for 5 sec on 2 cores 00:20:28.024 ========================================================================== 00:20:28.024 Latency [us] 00:20:28.024 IOPS MiB/s Average min max 00:20:28.024 Core 2: 80647.03 315.03 197.71 77.17 3054.54 00:20:28.024 Core 3: 82253.25 321.30 193.83 74.74 2981.83 00:20:28.024 ========================================================================== 00:20:28.024 Total : 162900.29 636.33 195.75 74.74 3054.54 00:20:28.024 00:20:28.024 Total operations: 814592, translate 0 pull_push 0 memzero 814592 00:20:28.024 23:46:16 nvmf_rdma.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:20:28.024 [2024-07-15 23:46:16.406266] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:29.927 Initializing NVMe Controllers 00:20:29.927 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:20:29.927 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:20:29.927 Initialization complete. Launching workers. 00:20:29.927 ======================================================== 00:20:29.927 Latency(us) 00:20:29.927 Device Information : IOPS MiB/s Average min max 00:20:29.927 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2008.90 7.85 7964.16 5986.69 8974.99 00:20:29.927 ======================================================== 00:20:29.927 Total : 2008.90 7.85 7964.16 5986.69 8974.99 00:20:29.927 00:20:29.927 23:46:18 nvmf_rdma.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:20:29.927 23:46:18 nvmf_rdma.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:20:29.927 23:46:18 nvmf_rdma.dma -- host/dma.sh@48 -- # local subsystem=0 00:20:29.927 23:46:18 nvmf_rdma.dma -- host/dma.sh@50 -- # jq . 00:20:29.927 [2024-07-15 23:46:18.731477] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:20:29.927 [2024-07-15 23:46:18.731527] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1516172 ] 00:20:29.927 [2024-07-15 23:46:18.780172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:29.927 [2024-07-15 23:46:18.854238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.927 [2024-07-15 23:46:18.854239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.486 bdev af19faf1-cf0a-4a68-86b2-ef8da474efe2 reports 1 memory domains 00:20:36.486 bdev af19faf1-cf0a-4a68-86b2-ef8da474efe2 supports RDMA memory domain 00:20:36.487 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:36.487 ========================================================================== 00:20:36.487 Latency [us] 00:20:36.487 IOPS MiB/s Average min max 00:20:36.487 Core 2: 18827.59 73.55 849.07 38.28 9458.78 00:20:36.487 Core 3: 19105.11 74.63 836.78 22.11 9747.72 00:20:36.487 ========================================================================== 00:20:36.487 Total : 37932.70 148.17 842.88 22.11 9747.72 00:20:36.487 00:20:36.487 Total operations: 189714, translate 189607 pull_push 0 memzero 107 00:20:36.487 23:46:24 nvmf_rdma.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:20:36.487 23:46:24 nvmf_rdma.dma -- host/dma.sh@120 -- # nvmftestfini 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@117 -- # sync 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@120 -- # set +e 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:36.487 rmmod nvme_rdma 00:20:36.487 rmmod nvme_fabrics 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@124 -- # set -e 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@125 -- # return 0 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@489 -- # '[' -n 1512925 ']' 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@490 -- # killprocess 1512925 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@942 -- # '[' -z 1512925 ']' 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@946 -- # kill -0 1512925 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@947 -- # uname 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1512925 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1512925' 00:20:36.487 killing process with pid 1512925 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@961 -- # kill 1512925 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@966 -- # wait 1512925 00:20:36.487 23:46:24 
nvmf_rdma.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:36.487 23:46:24 nvmf_rdma.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:36.487 00:20:36.487 real 0m31.637s 00:20:36.487 user 1m36.195s 00:20:36.487 sys 0m5.118s 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:36.487 23:46:24 nvmf_rdma.dma -- common/autotest_common.sh@10 -- # set +x 00:20:36.487 ************************************ 00:20:36.487 END TEST dma 00:20:36.487 ************************************ 00:20:36.487 23:46:24 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:20:36.487 23:46:24 nvmf_rdma -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:36.487 23:46:24 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:36.487 23:46:24 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:36.487 23:46:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:36.487 ************************************ 00:20:36.487 START TEST nvmf_identify 00:20:36.487 ************************************ 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:36.487 * Looking for test storage... 00:20:36.487 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:36.487 23:46:24 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:36.487 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:36.488 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:36.488 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.488 23:46:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:36.488 23:46:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:36.488 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:36.488 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:36.488 23:46:24 nvmf_rdma.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:36.488 23:46:24 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:40.672 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:40.672 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:40.672 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:40.673 23:46:29 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:40.673 Found net devices under 0000:da:00.0: mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:40.673 Found net devices under 0000:da:00.1: mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:40.673 23:46:29 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:40.673 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.673 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:40.673 altname enp218s0f0np0 00:20:40.673 altname ens818f0np0 00:20:40.673 inet 192.168.100.8/24 scope global mlx_0_0 00:20:40.673 valid_lft forever preferred_lft forever 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:40.673 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.673 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:40.673 altname enp218s0f1np1 00:20:40.673 altname ens818f1np1 00:20:40.673 inet 192.168.100.9/24 scope global mlx_0_1 00:20:40.673 valid_lft forever preferred_lft forever 00:20:40.673 23:46:29 
nvmf_rdma.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:40.673 192.168.100.9' 00:20:40.673 
23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:40.673 192.168.100.9' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:40.673 192.168.100.9' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:40.673 23:46:29 nvmf_rdma.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1520138 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1520138 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@823 -- # '[' -z 1520138 ']' 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:40.933 23:46:29 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:40.933 [2024-07-15 23:46:29.723416] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:20:40.933 [2024-07-15 23:46:29.723461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.933 [2024-07-15 23:46:29.777700] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.933 [2024-07-15 23:46:29.858001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:40.933 [2024-07-15 23:46:29.858039] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.933 [2024-07-15 23:46:29.858046] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.933 [2024-07-15 23:46:29.858052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.933 [2024-07-15 23:46:29.858057] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.933 [2024-07-15 23:46:29.858112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.933 [2024-07-15 23:46:29.858205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.933 [2024-07-15 23:46:29.858232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.933 [2024-07-15 23:46:29.858231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@856 -- # return 0 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 [2024-07-15 23:46:30.567312] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xbbdcc0/0xbc21b0) succeed. 00:20:41.869 [2024-07-15 23:46:30.576469] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xbbf300/0xc03840) succeed. 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 Malloc0 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:41.869 23:46:30 
nvmf_rdma.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 [2024-07-15 23:46:30.776962] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:41.869 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:41.869 [ 00:20:41.869 { 00:20:41.869 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:41.869 "subtype": "Discovery", 00:20:41.869 "listen_addresses": [ 00:20:41.869 { 00:20:41.869 "trtype": "RDMA", 00:20:41.869 "adrfam": "IPv4", 00:20:41.869 "traddr": "192.168.100.8", 00:20:41.869 "trsvcid": "4420" 00:20:41.869 } 00:20:41.869 ], 00:20:41.869 "allow_any_host": true, 00:20:41.870 "hosts": [] 00:20:41.870 }, 00:20:41.870 { 00:20:41.870 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.870 "subtype": "NVMe", 00:20:41.870 "listen_addresses": [ 00:20:41.870 { 00:20:41.870 "trtype": "RDMA", 00:20:41.870 "adrfam": "IPv4", 00:20:41.870 "traddr": "192.168.100.8", 00:20:41.870 "trsvcid": "4420" 00:20:41.870 } 00:20:41.870 ], 00:20:41.870 "allow_any_host": true, 00:20:41.870 "hosts": [], 00:20:41.870 "serial_number": "SPDK00000000000001", 00:20:41.870 "model_number": "SPDK bdev Controller", 00:20:41.870 "max_namespaces": 32, 00:20:41.870 "min_cntlid": 1, 00:20:41.870 "max_cntlid": 65519, 00:20:41.870 "namespaces": [ 00:20:41.870 { 00:20:41.870 "nsid": 1, 00:20:41.870 "bdev_name": "Malloc0", 00:20:41.870 "name": "Malloc0", 00:20:41.870 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:41.870 "eui64": "ABCDEF0123456789", 00:20:41.870 "uuid": "04a2440e-c559-4764-97c4-35beeb2d08da" 00:20:41.870 } 00:20:41.870 ] 00:20:41.870 } 00:20:41.870 ] 00:20:41.870 23:46:30 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:41.870 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:41.870 [2024-07-15 23:46:30.827153] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:20:41.870 [2024-07-15 23:46:30.827186] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520385 ] 00:20:42.137 [2024-07-15 23:46:30.868746] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:42.137 [2024-07-15 23:46:30.868824] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:42.137 [2024-07-15 23:46:30.868837] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:42.137 [2024-07-15 23:46:30.868841] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:42.137 [2024-07-15 23:46:30.868867] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:42.137 [2024-07-15 23:46:30.888043] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:20:42.137 [2024-07-15 23:46:30.898365] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:42.137 [2024-07-15 23:46:30.898375] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:42.137 [2024-07-15 23:46:30.898381] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898386] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898391] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898395] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898399] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898404] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898408] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898412] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898416] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898420] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898424] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898428] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898433] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898437] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898441] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 
length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898445] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898449] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898453] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898460] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898465] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898469] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898473] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898477] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898481] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898486] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898490] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898494] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898498] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898502] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898506] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898510] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898514] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:42.137 [2024-07-15 23:46:30.898518] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:42.137 [2024-07-15 23:46:30.898521] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:42.137 [2024-07-15 23:46:30.898543] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.137 [2024-07-15 23:46:30.898556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180100 00:20:42.137 [2024-07-15 23:46:30.903545] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.137 [2024-07-15 23:46:30.903554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.903560] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903566] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 
00:20:42.138 [2024-07-15 23:46:30.903572] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:42.138 [2024-07-15 23:46:30.903577] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:42.138 [2024-07-15 23:46:30.903589] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.138 [2024-07-15 23:46:30.903617] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.138 [2024-07-15 23:46:30.903622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.903627] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:42.138 [2024-07-15 23:46:30.903631] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903635] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:42.138 [2024-07-15 23:46:30.903646] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.138 [2024-07-15 23:46:30.903670] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.138 [2024-07-15 23:46:30.903674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.903679] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:42.138 [2024-07-15 23:46:30.903683] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:42.138 [2024-07-15 23:46:30.903694] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.138 [2024-07-15 23:46:30.903726] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.138 [2024-07-15 23:46:30.903730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.903735] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:42.138 [2024-07-15 23:46:30.903739] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 
00:20:42.138 [2024-07-15 23:46:30.903746] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.138 [2024-07-15 23:46:30.903768] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.138 [2024-07-15 23:46:30.903772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.903777] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:42.138 [2024-07-15 23:46:30.903781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:42.138 [2024-07-15 23:46:30.903785] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:42.138 [2024-07-15 23:46:30.903895] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:42.138 [2024-07-15 23:46:30.903899] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:42.138 [2024-07-15 23:46:30.903907] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903912] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.138 [2024-07-15 23:46:30.903933] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.138 [2024-07-15 23:46:30.903937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.903943] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:42.138 [2024-07-15 23:46:30.903947] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903953] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.138 [2024-07-15 23:46:30.903973] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.138 [2024-07-15 23:46:30.903977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.903982] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:42.138 [2024-07-15 23:46:30.903986] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:42.138 [2024-07-15 23:46:30.903989] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.903995] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:42.138 [2024-07-15 23:46:30.904001] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:42.138 [2024-07-15 23:46:30.904009] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.904015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:20:42.138 [2024-07-15 23:46:30.904050] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.138 [2024-07-15 23:46:30.904054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.904061] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:42.138 [2024-07-15 23:46:30.904066] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:42.138 [2024-07-15 23:46:30.904069] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:42.138 [2024-07-15 23:46:30.904075] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:42.138 [2024-07-15 23:46:30.904080] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:42.138 [2024-07-15 23:46:30.904083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:42.138 [2024-07-15 23:46:30.904087] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.904093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:42.138 [2024-07-15 23:46:30.904099] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.138 [2024-07-15 23:46:30.904105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.138 [2024-07-15 23:46:30.904124] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.138 [2024-07-15 23:46:30.904128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:42.138 [2024-07-15 23:46:30.904138] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:20:42.139 [2024-07-15 23:46:30.904148] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.139 [2024-07-15 23:46:30.904158] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.139 [2024-07-15 23:46:30.904168] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.139 [2024-07-15 23:46:30.904176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:42.139 [2024-07-15 23:46:30.904180] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904189] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:42.139 [2024-07-15 23:46:30.904194] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.139 [2024-07-15 23:46:30.904215] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.139 [2024-07-15 23:46:30.904219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:42.139 [2024-07-15 23:46:30.904224] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:42.139 [2024-07-15 23:46:30.904230] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:42.139 [2024-07-15 23:46:30.904234] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904241] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:20:42.139 [2024-07-15 23:46:30.904271] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.139 [2024-07-15 23:46:30.904275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:42.139 [2024-07-15 23:46:30.904280] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904288] 
nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:42.139 [2024-07-15 23:46:30.904310] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x180100 00:20:42.139 [2024-07-15 23:46:30.904322] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.139 [2024-07-15 23:46:30.904353] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.139 [2024-07-15 23:46:30.904357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:42.139 [2024-07-15 23:46:30.904366] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x180100 00:20:42.139 [2024-07-15 23:46:30.904376] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904380] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.139 [2024-07-15 23:46:30.904384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:42.139 [2024-07-15 23:46:30.904388] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904402] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.139 [2024-07-15 23:46:30.904406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:42.139 [2024-07-15 23:46:30.904414] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x180100 00:20:42.139 [2024-07-15 23:46:30.904424] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:20:42.139 [2024-07-15 23:46:30.904444] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.139 [2024-07-15 23:46:30.904448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:42.139 [2024-07-15 23:46:30.904456] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:20:42.139 ===================================================== 00:20:42.139 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:42.139 ===================================================== 00:20:42.139 
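The trace above shows the host bringing up the admin queue of the discovery controller at 192.168.100.8:4420 over RDMA (fabric CONNECT, IDENTIFY CONTROLLER, SET FEATURES for async event configuration, GET FEATURES for the keep alive timer, then "setting state to ready"), and the report that follows is the identify/discovery output printed once the controller is ready. As a minimal sketch of how that same flow can be driven through SPDK's public host API, assuming spdk/nvme.h and spdk/nvmf_spec.h roughly as shipped in this tree (this is not the autotest script that produced this log, and option and struct field names may differ between releases):

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_page_done;

/* Completion callback for the discovery GET LOG PAGE request. */
static void
get_discovery_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE (discovery) failed\n");
	}
	g_log_page_done = true;
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvmf_discovery_log_page log_hdr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "discovery_sketch";
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same target as in this log: RDMA / IPv4 / 192.168.100.8:4420, discovery NQN. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 "
			"subnqn:nqn.2014-08.org.nvmexpress.discovery") != 0) {
		return 1;
	}

	/*
	 * spdk_nvme_connect() walks the same admin-queue init state machine seen in
	 * the trace above: fabric CONNECT, IDENTIFY CONTROLLER, SET FEATURES (async
	 * event configuration), GET FEATURES (keep alive timer), then "ready".
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "failed to connect to %s\n", trid.traddr);
		return 1;
	}

	/*
	 * Log page 0x70 (SPDK_NVME_LOG_DISCOVERY).  Reading sizeof(log_hdr) bytes at
	 * offset 0 fetches just the 1 KiB header (genctr/numrec/recfmt), which matches
	 * the GET LOG PAGE ... len:0x400 command visible in the trace; the entries are
	 * then fetched with further GET LOG PAGE commands at higher offsets.
	 */
	memset(&log_hdr, 0, sizeof(log_hdr));
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					     &log_hdr, sizeof(log_hdr), 0,
					     get_discovery_log_done, NULL) == 0) {
		while (!g_log_page_done) {
			spdk_nvme_ctrlr_process_admin_completions(ctrlr);
		}
		printf("discovery log: genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
		       log_hdr.genctr, log_hdr.numrec);
	}

	/* Tears the controller down again; see the shutdown sequence further below. */
	spdk_nvme_detach(ctrlr);
	return 0;
}

The "Prepare to destruct SSD" message and the long run of FABRIC PROPERTY GET completions further down in this log are consistent with that detach path: the host writes CC to request shutdown and then keeps reading CSTS until the discovery controller reports shutdown complete.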
Controller Capabilities/Features
00:20:42.139 ================================
00:20:42.139 Vendor ID: 0000
00:20:42.139 Subsystem Vendor ID: 0000
00:20:42.139 Serial Number: ....................
00:20:42.139 Model Number: ........................................
00:20:42.139 Firmware Version: 24.09
00:20:42.139 Recommended Arb Burst: 0
00:20:42.139 IEEE OUI Identifier: 00 00 00
00:20:42.139 Multi-path I/O
00:20:42.139 May have multiple subsystem ports: No
00:20:42.139 May have multiple controllers: No
00:20:42.139 Associated with SR-IOV VF: No
00:20:42.139 Max Data Transfer Size: 131072
00:20:42.139 Max Number of Namespaces: 0
00:20:42.139 Max Number of I/O Queues: 1024
00:20:42.139 NVMe Specification Version (VS): 1.3
00:20:42.139 NVMe Specification Version (Identify): 1.3
00:20:42.139 Maximum Queue Entries: 128
00:20:42.139 Contiguous Queues Required: Yes
00:20:42.139 Arbitration Mechanisms Supported
00:20:42.139 Weighted Round Robin: Not Supported
00:20:42.139 Vendor Specific: Not Supported
00:20:42.139 Reset Timeout: 15000 ms
00:20:42.139 Doorbell Stride: 4 bytes
00:20:42.139 NVM Subsystem Reset: Not Supported
00:20:42.139 Command Sets Supported
00:20:42.139 NVM Command Set: Supported
00:20:42.139 Boot Partition: Not Supported
00:20:42.139 Memory Page Size Minimum: 4096 bytes
00:20:42.139 Memory Page Size Maximum: 4096 bytes
00:20:42.139 Persistent Memory Region: Not Supported
00:20:42.139 Optional Asynchronous Events Supported
00:20:42.139 Namespace Attribute Notices: Not Supported
00:20:42.139 Firmware Activation Notices: Not Supported
00:20:42.140 ANA Change Notices: Not Supported
00:20:42.140 PLE Aggregate Log Change Notices: Not Supported
00:20:42.140 LBA Status Info Alert Notices: Not Supported
00:20:42.140 EGE Aggregate Log Change Notices: Not Supported
00:20:42.140 Normal NVM Subsystem Shutdown event: Not Supported
00:20:42.140 Zone Descriptor Change Notices: Not Supported
00:20:42.140 Discovery Log Change Notices: Supported
00:20:42.140 Controller Attributes
00:20:42.140 128-bit Host Identifier: Not Supported
00:20:42.140 Non-Operational Permissive Mode: Not Supported
00:20:42.140 NVM Sets: Not Supported
00:20:42.140 Read Recovery Levels: Not Supported
00:20:42.140 Endurance Groups: Not Supported
00:20:42.140 Predictable Latency Mode: Not Supported
00:20:42.140 Traffic Based Keep ALive: Not Supported
00:20:42.140 Namespace Granularity: Not Supported
00:20:42.140 SQ Associations: Not Supported
00:20:42.140 UUID List: Not Supported
00:20:42.140 Multi-Domain Subsystem: Not Supported
00:20:42.140 Fixed Capacity Management: Not Supported
00:20:42.140 Variable Capacity Management: Not Supported
00:20:42.140 Delete Endurance Group: Not Supported
00:20:42.140 Delete NVM Set: Not Supported
00:20:42.140 Extended LBA Formats Supported: Not Supported
00:20:42.140 Flexible Data Placement Supported: Not Supported
00:20:42.140 
00:20:42.140 Controller Memory Buffer Support
00:20:42.140 ================================
00:20:42.140 Supported: No
00:20:42.140 
00:20:42.140 Persistent Memory Region Support
00:20:42.140 ================================
00:20:42.140 Supported: No
00:20:42.140 
00:20:42.140 Admin Command Set Attributes
00:20:42.140 ============================
00:20:42.140 Security Send/Receive: Not Supported
00:20:42.140 Format NVM: Not Supported
00:20:42.140 Firmware Activate/Download: Not Supported
00:20:42.140 Namespace Management: Not Supported
00:20:42.140 Device Self-Test: Not Supported
00:20:42.140 Directives: Not Supported
00:20:42.140 NVMe-MI: Not Supported
00:20:42.140 Virtualization Management: Not Supported
00:20:42.140 Doorbell Buffer Config: Not Supported
00:20:42.140 Get LBA Status Capability: Not Supported
00:20:42.140 Command & Feature Lockdown Capability: Not Supported
00:20:42.140 Abort Command Limit: 1
00:20:42.140 Async Event Request Limit: 4
00:20:42.140 Number of Firmware Slots: N/A
00:20:42.140 Firmware Slot 1 Read-Only: N/A
00:20:42.140 Firmware Activation Without Reset: N/A
00:20:42.140 Multiple Update Detection Support: N/A
00:20:42.140 Firmware Update Granularity: No Information Provided
00:20:42.140 Per-Namespace SMART Log: No
00:20:42.140 Asymmetric Namespace Access Log Page: Not Supported
00:20:42.140 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:42.140 Command Effects Log Page: Not Supported
00:20:42.140 Get Log Page Extended Data: Supported
00:20:42.140 Telemetry Log Pages: Not Supported
00:20:42.140 Persistent Event Log Pages: Not Supported
00:20:42.140 Supported Log Pages Log Page: May Support
00:20:42.140 Commands Supported & Effects Log Page: Not Supported
00:20:42.140 Feature Identifiers & Effects Log Page:May Support
00:20:42.140 NVMe-MI Commands & Effects Log Page: May Support
00:20:42.140 Data Area 4 for Telemetry Log: Not Supported
00:20:42.140 Error Log Page Entries Supported: 128
00:20:42.140 Keep Alive: Not Supported
00:20:42.140 
00:20:42.140 NVM Command Set Attributes
00:20:42.140 ==========================
00:20:42.140 Submission Queue Entry Size
00:20:42.140 Max: 1
00:20:42.140 Min: 1
00:20:42.140 Completion Queue Entry Size
00:20:42.140 Max: 1
00:20:42.140 Min: 1
00:20:42.140 Number of Namespaces: 0
00:20:42.140 Compare Command: Not Supported
00:20:42.140 Write Uncorrectable Command: Not Supported
00:20:42.140 Dataset Management Command: Not Supported
00:20:42.140 Write Zeroes Command: Not Supported
00:20:42.140 Set Features Save Field: Not Supported
00:20:42.140 Reservations: Not Supported
00:20:42.140 Timestamp: Not Supported
00:20:42.140 Copy: Not Supported
00:20:42.140 Volatile Write Cache: Not Present
00:20:42.140 Atomic Write Unit (Normal): 1
00:20:42.140 Atomic Write Unit (PFail): 1
00:20:42.140 Atomic Compare & Write Unit: 1
00:20:42.140 Fused Compare & Write: Supported
00:20:42.140 Scatter-Gather List
00:20:42.140 SGL Command Set: Supported
00:20:42.140 SGL Keyed: Supported
00:20:42.140 SGL Bit Bucket Descriptor: Not Supported
00:20:42.140 SGL Metadata Pointer: Not Supported
00:20:42.140 Oversized SGL: Not Supported
00:20:42.140 SGL Metadata Address: Not Supported
00:20:42.140 SGL Offset: Supported
00:20:42.140 Transport SGL Data Block: Not Supported
00:20:42.140 Replay Protected Memory Block: Not Supported
00:20:42.140 
00:20:42.140 Firmware Slot Information
00:20:42.140 =========================
00:20:42.140 Active slot: 0
00:20:42.140 
00:20:42.140 
00:20:42.140 Error Log
00:20:42.140 =========
00:20:42.140 
00:20:42.140 Active Namespaces
00:20:42.140 =================
00:20:42.140 Discovery Log Page
00:20:42.140 ==================
00:20:42.140 Generation Counter: 2
00:20:42.140 Number of Records: 2
00:20:42.140 Record Format: 0
00:20:42.140 
00:20:42.140 Discovery Log Entry 0
00:20:42.140 ----------------------
00:20:42.140 Transport Type: 1 (RDMA)
00:20:42.140 Address Family: 1 (IPv4)
00:20:42.140 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:42.140 Entry Flags:
00:20:42.140 Duplicate Returned Information: 1
00:20:42.140 Explicit Persistent Connection Support for Discovery: 1
00:20:42.141 Transport Requirements:
00:20:42.141 Secure Channel: Not Required
00:20:42.141 Port ID: 0 (0x0000)
00:20:42.141 Controller ID: 65535 (0xffff) 00:20:42.141 Admin Max SQ Size: 128 00:20:42.141 Transport Service Identifier: 4420 00:20:42.141 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:42.141 Transport Address: 192.168.100.8 00:20:42.141 Transport Specific Address Subtype - RDMA 00:20:42.141 RDMA QP Service Type: 1 (Reliable Connected) 00:20:42.141 RDMA Provider Type: 1 (No provider specified) 00:20:42.141 RDMA CM Service: 1 (RDMA_CM) 00:20:42.141 Discovery Log Entry 1 00:20:42.141 ---------------------- 00:20:42.141 Transport Type: 1 (RDMA) 00:20:42.141 Address Family: 1 (IPv4) 00:20:42.141 Subsystem Type: 2 (NVM Subsystem) 00:20:42.141 Entry Flags: 00:20:42.141 Duplicate Returned Information: 0 00:20:42.141 Explicit Persistent Connection Support for Discovery: 0 00:20:42.141 Transport Requirements: 00:20:42.141 Secure Channel: Not Required 00:20:42.141 Port ID: 0 (0x0000) 00:20:42.141 Controller ID: 65535 (0xffff) 00:20:42.141 Admin Max SQ Size: [2024-07-15 23:46:30.904521] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:42.141 [2024-07-15 23:46:30.904529] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 6154 doesn't match qid 00:20:42.141 [2024-07-15 23:46:30.904546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32599 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904551] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 6154 doesn't match qid 00:20:42.141 [2024-07-15 23:46:30.904557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32599 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904561] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 6154 doesn't match qid 00:20:42.141 [2024-07-15 23:46:30.904567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32599 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904572] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 6154 doesn't match qid 00:20:42.141 [2024-07-15 23:46:30.904577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32599 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904585] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.141 [2024-07-15 23:46:30.904610] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.141 [2024-07-15 23:46:30.904616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904624] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.141 [2024-07-15 23:46:30.904634] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904659] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:20:42.141 [2024-07-15 23:46:30.904664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904669] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:42.141 [2024-07-15 23:46:30.904674] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:42.141 [2024-07-15 23:46:30.904678] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904685] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.141 [2024-07-15 23:46:30.904708] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.141 [2024-07-15 23:46:30.904712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904717] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904724] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.141 [2024-07-15 23:46:30.904745] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.141 [2024-07-15 23:46:30.904749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904753] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904760] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.141 [2024-07-15 23:46:30.904786] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.141 [2024-07-15 23:46:30.904790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904794] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904801] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.141 [2024-07-15 23:46:30.904828] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.141 [2024-07-15 23:46:30.904832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 
23:46:30.904836] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904845] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.141 [2024-07-15 23:46:30.904869] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.141 [2024-07-15 23:46:30.904873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:42.141 [2024-07-15 23:46:30.904878] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904884] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.141 [2024-07-15 23:46:30.904890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.141 [2024-07-15 23:46:30.904909] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.141 [2024-07-15 23:46:30.904914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.904918] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.904925] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.904930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.904947] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.904951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.904955] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.904962] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.904968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.904984] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.904988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.904993] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.904999] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905027] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905035] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905042] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905069] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905077] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905085] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905108] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905116] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905123] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905148] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905157] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905163] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905184] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905192] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905199] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905223] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905231] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905238] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905265] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905274] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905280] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905301] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905311] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905318] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905340] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905348] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905355] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905377] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 
23:46:30.905386] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905392] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.142 [2024-07-15 23:46:30.905415] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.142 [2024-07-15 23:46:30.905419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:42.142 [2024-07-15 23:46:30.905423] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:20:42.142 [2024-07-15 23:46:30.905430] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905452] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905461] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905467] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905496] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905504] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905511] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905536] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905549] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905556] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905581] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905590] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905597] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905624] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905632] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905639] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905661] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905669] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905676] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905706] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905715] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905721] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905727] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905746] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905755] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905761] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905787] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:42.143 [2024-07-15 23:46:30.905797] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905803] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.143 [2024-07-15 23:46:30.905809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.143 [2024-07-15 23:46:30.905826] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.143 [2024-07-15 23:46:30.905830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.905834] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905841] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.905863] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.905867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.905872] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905878] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905884] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.905903] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.905907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.905912] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905918] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.905942] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.905947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 
23:46:30.905951] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905957] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.905983] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.905987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.905991] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.905998] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906024] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906033] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906040] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906067] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906075] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906082] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906106] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906114] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906121] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906148] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906156] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906163] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906182] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906191] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906197] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906220] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906228] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906235] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906265] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906273] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906280] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906305] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906314] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906320] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.144 [2024-07-15 23:46:30.906341] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.144 [2024-07-15 23:46:30.906345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:42.144 [2024-07-15 23:46:30.906350] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:20:42.144 [2024-07-15 23:46:30.906356] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906382] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906390] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906397] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906403] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906422] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906431] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906438] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906466] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906474] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906481] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906502] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 
23:46:30.906510] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906516] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906541] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906550] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906557] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906578] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906586] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906593] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906618] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906627] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906634] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906655] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906663] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906670] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906694] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906702] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906709] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906716] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906734] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906743] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906749] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906773] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906782] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906788] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906812] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906821] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906827] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906856] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906864] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906871] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906894] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906902] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906909] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.145 [2024-07-15 23:46:30.906914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.145 [2024-07-15 23:46:30.906931] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.145 [2024-07-15 23:46:30.906936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:42.145 [2024-07-15 23:46:30.906940] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.906947] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.906954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.906973] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.906978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.906982] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.906989] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.906994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907014] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907023] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907030] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907056] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 
23:46:30.907065] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907072] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907100] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907108] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907115] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907121] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907142] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907150] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907157] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907180] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907188] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907198] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907219] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907227] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907234] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907255] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907263] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907270] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907291] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907299] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907306] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907329] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907337] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907344] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907349] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907369] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907378] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907384] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907406] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907415] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907423] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907447] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907456] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907462] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907485] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907493] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907500] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.907521] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.907525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.907529] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.907536] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.911549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.146 [2024-07-15 23:46:30.911565] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.146 [2024-07-15 23:46:30.911570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:001a p:0 m:0 dnr:0 00:20:42.146 [2024-07-15 23:46:30.911574] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:20:42.146 [2024-07-15 23:46:30.911579] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:42.146 128 00:20:42.146 Transport Service Identifier: 4420 00:20:42.147 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:42.147 Transport Address: 192.168.100.8 00:20:42.147 Transport Specific Address Subtype - RDMA 00:20:42.147 RDMA QP Service Type: 1 (Reliable Connected) 00:20:42.147 RDMA Provider Type: 1 (No provider specified) 00:20:42.147 RDMA CM Service: 1 (RDMA_CM) 00:20:42.147 23:46:30 nvmf_rdma.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:42.147 [2024-07-15 23:46:30.981953] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:20:42.147 [2024-07-15 23:46:30.981992] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520391 ] 00:20:42.147 [2024-07-15 23:46:31.022645] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:42.147 [2024-07-15 23:46:31.022711] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:42.147 [2024-07-15 23:46:31.022723] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:42.147 [2024-07-15 23:46:31.022727] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:42.147 [2024-07-15 23:46:31.022747] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:42.147 [2024-07-15 23:46:31.033681] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:20:42.147 [2024-07-15 23:46:31.047938] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:42.147 [2024-07-15 23:46:31.047947] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:42.147 [2024-07-15 23:46:31.047953] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047958] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047963] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047967] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047972] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047976] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047980] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047984] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047988] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047992] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.047997] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048001] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048005] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 
0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048009] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048013] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048018] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048022] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048026] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048030] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048034] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048038] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048043] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048049] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048053] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048058] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048062] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048066] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048070] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048074] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048079] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048083] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048087] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:42.147 [2024-07-15 23:46:31.048090] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:42.147 [2024-07-15 23:46:31.048093] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:42.147 [2024-07-15 23:46:31.048107] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.048117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x180100 00:20:42.147 [2024-07-15 23:46:31.053545] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.147 [2024-07-15 23:46:31.053553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 
sqhd:0001 p:0 m:0 dnr:0 00:20:42.147 [2024-07-15 23:46:31.053558] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053563] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:42.147 [2024-07-15 23:46:31.053569] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:42.147 [2024-07-15 23:46:31.053574] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:42.147 [2024-07-15 23:46:31.053584] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.147 [2024-07-15 23:46:31.053615] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.147 [2024-07-15 23:46:31.053619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:42.147 [2024-07-15 23:46:31.053624] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:42.147 [2024-07-15 23:46:31.053628] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053632] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:42.147 [2024-07-15 23:46:31.053638] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053644] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.147 [2024-07-15 23:46:31.053664] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.147 [2024-07-15 23:46:31.053668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:42.147 [2024-07-15 23:46:31.053674] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:42.147 [2024-07-15 23:46:31.053678] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053683] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:42.147 [2024-07-15 23:46:31.053689] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053695] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.147 [2024-07-15 23:46:31.053716] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.147 [2024-07-15 23:46:31.053721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:42.147 [2024-07-15 23:46:31.053725] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:42.147 [2024-07-15 23:46:31.053729] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053736] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.147 [2024-07-15 23:46:31.053756] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.147 [2024-07-15 23:46:31.053760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:42.147 [2024-07-15 23:46:31.053764] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:42.147 [2024-07-15 23:46:31.053768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:42.147 [2024-07-15 23:46:31.053772] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053777] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:42.147 [2024-07-15 23:46:31.053881] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:42.147 [2024-07-15 23:46:31.053885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:42.147 [2024-07-15 23:46:31.053891] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.147 [2024-07-15 23:46:31.053897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.148 [2024-07-15 23:46:31.053914] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.148 [2024-07-15 23:46:31.053919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:42.148 [2024-07-15 23:46:31.053923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:42.148 [2024-07-15 23:46:31.053927] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.053933] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.053939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.148 [2024-07-15 23:46:31.053955] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.148 [2024-07-15 23:46:31.053959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:42.148 [2024-07-15 23:46:31.053963] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:42.148 [2024-07-15 23:46:31.053967] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.053971] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.053976] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:42.148 [2024-07-15 23:46:31.053984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.053991] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.053997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:20:42.148 [2024-07-15 23:46:31.054040] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.148 [2024-07-15 23:46:31.054045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:42.148 [2024-07-15 23:46:31.054051] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:42.148 [2024-07-15 23:46:31.054055] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:42.148 [2024-07-15 23:46:31.054058] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:42.148 [2024-07-15 23:46:31.054062] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:42.148 [2024-07-15 23:46:31.054066] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:42.148 [2024-07-15 23:46:31.054069] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054073] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054079] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054085] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.148 [2024-07-15 23:46:31.054109] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.148 [2024-07-15 23:46:31.054113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:42.148 [2024-07-15 23:46:31.054119] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.148 [2024-07-15 23:46:31.054129] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.148 [2024-07-15 23:46:31.054139] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.148 [2024-07-15 23:46:31.054151] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.148 [2024-07-15 23:46:31.054159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054163] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054177] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.148 [2024-07-15 23:46:31.054202] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.148 [2024-07-15 23:46:31.054206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:42.148 [2024-07-15 23:46:31.054211] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:42.148 [2024-07-15 23:46:31.054217] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054221] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054226] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054237] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.148 [2024-07-15 23:46:31.054256] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv 
completion 00:20:42.148 [2024-07-15 23:46:31.054260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:20:42.148 [2024-07-15 23:46:31.054308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054312] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054318] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054325] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x180100 00:20:42.148 [2024-07-15 23:46:31.054355] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.148 [2024-07-15 23:46:31.054359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:42.148 [2024-07-15 23:46:31.054371] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:42.148 [2024-07-15 23:46:31.054379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054384] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054389] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054395] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x180100 00:20:42.148 [2024-07-15 23:46:31.054435] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.148 [2024-07-15 23:46:31.054439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:42.148 [2024-07-15 23:46:31.054449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054453] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:20:42.148 [2024-07-15 23:46:31.054459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:42.148 [2024-07-15 23:46:31.054466] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 
key:0x180100 00:20:42.149 [2024-07-15 23:46:31.054496] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:42.149 [2024-07-15 23:46:31.054511] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:42.149 [2024-07-15 23:46:31.054523] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:42.149 [2024-07-15 23:46:31.054528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:42.149 [2024-07-15 23:46:31.054532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:42.149 [2024-07-15 23:46:31.054536] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:42.149 [2024-07-15 23:46:31.054546] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:42.149 [2024-07-15 23:46:31.054550] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:42.149 [2024-07-15 23:46:31.054554] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:42.149 [2024-07-15 23:46:31.054566] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054571] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.149 [2024-07-15 23:46:31.054578] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:42.149 [2024-07-15 23:46:31.054592] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054601] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054607] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.149 [2024-07-15 23:46:31.054619] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054627] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054639] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054647] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054653] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.149 [2024-07-15 23:46:31.054678] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054687] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054693] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.149 [2024-07-15 23:46:31.054718] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054726] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054736] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x180100 00:20:42.149 [2024-07-15 23:46:31.054749] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x180100 00:20:42.149 [2024-07-15 23:46:31.054761] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 
key:0x180100 00:20:42.149 [2024-07-15 23:46:31.054775] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x180100 00:20:42.149 [2024-07-15 23:46:31.054787] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054801] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054806] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054817] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054835] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054844] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:20:42.149 [2024-07-15 23:46:31.054848] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.149 [2024-07-15 23:46:31.054852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:42.149 [2024-07-15 23:46:31.054858] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:20:42.149 ===================================================== 00:20:42.149 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:42.149 ===================================================== 00:20:42.149 Controller Capabilities/Features 00:20:42.149 ================================ 00:20:42.149 Vendor ID: 8086 00:20:42.149 Subsystem Vendor ID: 8086 00:20:42.149 Serial Number: SPDK00000000000001 00:20:42.149 Model Number: SPDK bdev Controller 00:20:42.149 Firmware Version: 24.09 00:20:42.149 Recommended Arb Burst: 6 00:20:42.149 IEEE OUI Identifier: e4 d2 5c 00:20:42.149 Multi-path I/O 00:20:42.149 May have multiple subsystem ports: Yes 00:20:42.149 May have multiple controllers: Yes 00:20:42.149 Associated with SR-IOV VF: No 00:20:42.149 Max Data Transfer Size: 131072 00:20:42.149 Max Number of Namespaces: 32 00:20:42.149 Max Number of I/O Queues: 127 00:20:42.149 NVMe Specification Version (VS): 1.3 00:20:42.149 NVMe Specification Version (Identify): 1.3 00:20:42.149 Maximum Queue Entries: 128 00:20:42.149 Contiguous Queues Required: Yes 00:20:42.149 Arbitration Mechanisms Supported 00:20:42.149 Weighted Round Robin: Not Supported 00:20:42.149 Vendor Specific: Not Supported 00:20:42.149 Reset Timeout: 15000 ms 00:20:42.149 Doorbell Stride: 4 bytes 00:20:42.149 NVM Subsystem Reset: Not Supported 00:20:42.149 Command Sets Supported 00:20:42.149 NVM Command Set: Supported 00:20:42.149 Boot 
Partition: Not Supported 00:20:42.149 Memory Page Size Minimum: 4096 bytes 00:20:42.149 Memory Page Size Maximum: 4096 bytes 00:20:42.149 Persistent Memory Region: Not Supported 00:20:42.149 Optional Asynchronous Events Supported 00:20:42.149 Namespace Attribute Notices: Supported 00:20:42.149 Firmware Activation Notices: Not Supported 00:20:42.149 ANA Change Notices: Not Supported 00:20:42.149 PLE Aggregate Log Change Notices: Not Supported 00:20:42.149 LBA Status Info Alert Notices: Not Supported 00:20:42.149 EGE Aggregate Log Change Notices: Not Supported 00:20:42.149 Normal NVM Subsystem Shutdown event: Not Supported 00:20:42.149 Zone Descriptor Change Notices: Not Supported 00:20:42.149 Discovery Log Change Notices: Not Supported 00:20:42.149 Controller Attributes 00:20:42.149 128-bit Host Identifier: Supported 00:20:42.149 Non-Operational Permissive Mode: Not Supported 00:20:42.149 NVM Sets: Not Supported 00:20:42.149 Read Recovery Levels: Not Supported 00:20:42.149 Endurance Groups: Not Supported 00:20:42.149 Predictable Latency Mode: Not Supported 00:20:42.149 Traffic Based Keep ALive: Not Supported 00:20:42.149 Namespace Granularity: Not Supported 00:20:42.149 SQ Associations: Not Supported 00:20:42.149 UUID List: Not Supported 00:20:42.149 Multi-Domain Subsystem: Not Supported 00:20:42.150 Fixed Capacity Management: Not Supported 00:20:42.150 Variable Capacity Management: Not Supported 00:20:42.150 Delete Endurance Group: Not Supported 00:20:42.150 Delete NVM Set: Not Supported 00:20:42.150 Extended LBA Formats Supported: Not Supported 00:20:42.150 Flexible Data Placement Supported: Not Supported 00:20:42.150 00:20:42.150 Controller Memory Buffer Support 00:20:42.150 ================================ 00:20:42.150 Supported: No 00:20:42.150 00:20:42.150 Persistent Memory Region Support 00:20:42.150 ================================ 00:20:42.150 Supported: No 00:20:42.150 00:20:42.150 Admin Command Set Attributes 00:20:42.150 ============================ 00:20:42.150 Security Send/Receive: Not Supported 00:20:42.150 Format NVM: Not Supported 00:20:42.150 Firmware Activate/Download: Not Supported 00:20:42.150 Namespace Management: Not Supported 00:20:42.150 Device Self-Test: Not Supported 00:20:42.150 Directives: Not Supported 00:20:42.150 NVMe-MI: Not Supported 00:20:42.150 Virtualization Management: Not Supported 00:20:42.150 Doorbell Buffer Config: Not Supported 00:20:42.150 Get LBA Status Capability: Not Supported 00:20:42.150 Command & Feature Lockdown Capability: Not Supported 00:20:42.150 Abort Command Limit: 4 00:20:42.150 Async Event Request Limit: 4 00:20:42.150 Number of Firmware Slots: N/A 00:20:42.150 Firmware Slot 1 Read-Only: N/A 00:20:42.150 Firmware Activation Without Reset: N/A 00:20:42.150 Multiple Update Detection Support: N/A 00:20:42.150 Firmware Update Granularity: No Information Provided 00:20:42.150 Per-Namespace SMART Log: No 00:20:42.150 Asymmetric Namespace Access Log Page: Not Supported 00:20:42.150 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:42.150 Command Effects Log Page: Supported 00:20:42.150 Get Log Page Extended Data: Supported 00:20:42.150 Telemetry Log Pages: Not Supported 00:20:42.150 Persistent Event Log Pages: Not Supported 00:20:42.150 Supported Log Pages Log Page: May Support 00:20:42.150 Commands Supported & Effects Log Page: Not Supported 00:20:42.150 Feature Identifiers & Effects Log Page:May Support 00:20:42.150 NVMe-MI Commands & Effects Log Page: May Support 00:20:42.150 Data Area 4 for Telemetry Log: Not Supported 00:20:42.150 
Error Log Page Entries Supported: 128 00:20:42.150 Keep Alive: Supported 00:20:42.150 Keep Alive Granularity: 10000 ms 00:20:42.150 00:20:42.150 NVM Command Set Attributes 00:20:42.150 ========================== 00:20:42.150 Submission Queue Entry Size 00:20:42.150 Max: 64 00:20:42.150 Min: 64 00:20:42.150 Completion Queue Entry Size 00:20:42.150 Max: 16 00:20:42.150 Min: 16 00:20:42.150 Number of Namespaces: 32 00:20:42.150 Compare Command: Supported 00:20:42.150 Write Uncorrectable Command: Not Supported 00:20:42.150 Dataset Management Command: Supported 00:20:42.150 Write Zeroes Command: Supported 00:20:42.150 Set Features Save Field: Not Supported 00:20:42.150 Reservations: Supported 00:20:42.150 Timestamp: Not Supported 00:20:42.150 Copy: Supported 00:20:42.150 Volatile Write Cache: Present 00:20:42.150 Atomic Write Unit (Normal): 1 00:20:42.150 Atomic Write Unit (PFail): 1 00:20:42.150 Atomic Compare & Write Unit: 1 00:20:42.150 Fused Compare & Write: Supported 00:20:42.150 Scatter-Gather List 00:20:42.150 SGL Command Set: Supported 00:20:42.150 SGL Keyed: Supported 00:20:42.150 SGL Bit Bucket Descriptor: Not Supported 00:20:42.150 SGL Metadata Pointer: Not Supported 00:20:42.150 Oversized SGL: Not Supported 00:20:42.150 SGL Metadata Address: Not Supported 00:20:42.150 SGL Offset: Supported 00:20:42.150 Transport SGL Data Block: Not Supported 00:20:42.150 Replay Protected Memory Block: Not Supported 00:20:42.150 00:20:42.150 Firmware Slot Information 00:20:42.150 ========================= 00:20:42.150 Active slot: 1 00:20:42.150 Slot 1 Firmware Revision: 24.09 00:20:42.150 00:20:42.150 00:20:42.150 Commands Supported and Effects 00:20:42.150 ============================== 00:20:42.150 Admin Commands 00:20:42.150 -------------- 00:20:42.150 Get Log Page (02h): Supported 00:20:42.150 Identify (06h): Supported 00:20:42.150 Abort (08h): Supported 00:20:42.150 Set Features (09h): Supported 00:20:42.150 Get Features (0Ah): Supported 00:20:42.150 Asynchronous Event Request (0Ch): Supported 00:20:42.150 Keep Alive (18h): Supported 00:20:42.150 I/O Commands 00:20:42.150 ------------ 00:20:42.150 Flush (00h): Supported LBA-Change 00:20:42.150 Write (01h): Supported LBA-Change 00:20:42.150 Read (02h): Supported 00:20:42.150 Compare (05h): Supported 00:20:42.150 Write Zeroes (08h): Supported LBA-Change 00:20:42.150 Dataset Management (09h): Supported LBA-Change 00:20:42.150 Copy (19h): Supported LBA-Change 00:20:42.150 00:20:42.150 Error Log 00:20:42.150 ========= 00:20:42.150 00:20:42.150 Arbitration 00:20:42.150 =========== 00:20:42.150 Arbitration Burst: 1 00:20:42.150 00:20:42.150 Power Management 00:20:42.150 ================ 00:20:42.150 Number of Power States: 1 00:20:42.150 Current Power State: Power State #0 00:20:42.150 Power State #0: 00:20:42.150 Max Power: 0.00 W 00:20:42.150 Non-Operational State: Operational 00:20:42.150 Entry Latency: Not Reported 00:20:42.150 Exit Latency: Not Reported 00:20:42.150 Relative Read Throughput: 0 00:20:42.150 Relative Read Latency: 0 00:20:42.150 Relative Write Throughput: 0 00:20:42.150 Relative Write Latency: 0 00:20:42.150 Idle Power: Not Reported 00:20:42.150 Active Power: Not Reported 00:20:42.150 Non-Operational Permissive Mode: Not Supported 00:20:42.150 00:20:42.150 Health Information 00:20:42.150 ================== 00:20:42.150 Critical Warnings: 00:20:42.150 Available Spare Space: OK 00:20:42.150 Temperature: OK 00:20:42.150 Device Reliability: OK 00:20:42.150 Read Only: No 00:20:42.150 Volatile Memory Backup: OK 00:20:42.150 Current 
Temperature: 0 Kelvin (-273 Celsius) 00:20:42.150 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:42.150 Available Spare: 0% 00:20:42.150 Available Spare Threshold: 0% 00:20:42.150 Life Percentage [2024-07-15 23:46:31.054933] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x180100 00:20:42.150 [2024-07-15 23:46:31.054940] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.150 [2024-07-15 23:46:31.054958] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.150 [2024-07-15 23:46:31.054962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:42.150 [2024-07-15 23:46:31.054966] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:20:42.150 [2024-07-15 23:46:31.054988] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:42.150 [2024-07-15 23:46:31.054995] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 60301 doesn't match qid 00:20:42.150 [2024-07-15 23:46:31.055006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:20:42.150 [2024-07-15 23:46:31.055011] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 60301 doesn't match qid 00:20:42.150 [2024-07-15 23:46:31.055017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055021] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 60301 doesn't match qid 00:20:42.151 [2024-07-15 23:46:31.055027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055031] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 60301 doesn't match qid 00:20:42.151 [2024-07-15 23:46:31.055037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32584 cdw0:5 sqhd:0ad0 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055045] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055067] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055078] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055088] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055108] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 
[2024-07-15 23:46:31.055112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055116] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:42.151 [2024-07-15 23:46:31.055120] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:42.151 [2024-07-15 23:46:31.055124] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055131] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055154] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055163] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055169] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055175] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055197] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055205] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055212] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055234] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055243] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055249] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055279] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055288] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055294] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055315] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055324] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055330] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055353] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055361] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055368] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055393] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055402] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055409] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055431] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055439] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055446] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055470] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055478] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055485] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055491] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.151 [2024-07-15 23:46:31.055511] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.151 [2024-07-15 23:46:31.055516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:42.151 [2024-07-15 23:46:31.055520] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:20:42.151 [2024-07-15 23:46:31.055527] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055555] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055564] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055571] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055601] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055609] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055616] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055637] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055645] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055652] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055680] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055689] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055695] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055719] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055727] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055734] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055755] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055763] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055770] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055794] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055802] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055809] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055830] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 
23:46:31.055838] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055845] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055865] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055874] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055880] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055903] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055911] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055918] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055945] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055953] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055960] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.055967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.055990] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.055994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.055998] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.056004] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.056010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.152 [2024-07-15 23:46:31.056033] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.152 [2024-07-15 23:46:31.056037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:42.152 [2024-07-15 23:46:31.056041] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.056048] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.152 [2024-07-15 23:46:31.056054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056067] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056076] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056082] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056109] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056118] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056124] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056151] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056159] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056166] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056186] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056194] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056202] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056226] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056234] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056241] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056261] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056270] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056276] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056306] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056315] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056322] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056350] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056359] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056365] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056394] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 
23:46:31.056402] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056409] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056436] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056444] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056452] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056476] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056484] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056491] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056519] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056527] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056533] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056562] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:42.153 [2024-07-15 23:46:31.056570] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056577] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.153 [2024-07-15 23:46:31.056582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.153 [2024-07-15 23:46:31.056602] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.153 [2024-07-15 23:46:31.056606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056611] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056617] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056644] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056652] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056659] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056683] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056693] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056699] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056720] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056729] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056735] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056757] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056766] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056773] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056799] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056808] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056815] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056842] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056850] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056857] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056880] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056889] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056896] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056925] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 23:46:31.056935] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056942] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.154 [2024-07-15 23:46:31.056967] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.154 [2024-07-15 23:46:31.056971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:42.154 [2024-07-15 
23:46:31.056976] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056982] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.154 [2024-07-15 23:46:31.056988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057004] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057013] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057019] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057048] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057057] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057063] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057087] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057095] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057102] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057130] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057139] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057145] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057171] 
nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057180] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057187] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057214] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057222] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057229] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057250] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057258] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057265] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057285] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057294] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057300] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057321] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057330] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057336] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 
0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057363] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057372] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057378] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057402] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057412] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057419] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057444] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057452] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057459] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.155 [2024-07-15 23:46:31.057486] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.155 [2024-07-15 23:46:31.057490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:42.155 [2024-07-15 23:46:31.057494] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x180100 00:20:42.155 [2024-07-15 23:46:31.057501] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.156 [2024-07-15 23:46:31.057507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.156 [2024-07-15 23:46:31.057522] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.156 [2024-07-15 23:46:31.057526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:42.156 [2024-07-15 
23:46:31.057530] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x180100 00:20:42.156 [2024-07-15 23:46:31.057537] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x180100 00:20:42.156 [2024-07-15 23:46:31.061548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:42.156 [2024-07-15 23:46:31.061564] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:42.156 [2024-07-15 23:46:31.061569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0017 p:0 m:0 dnr:0 00:20:42.156 [2024-07-15 23:46:31.061573] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x180100 00:20:42.156 [2024-07-15 23:46:31.061578] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:20:42.156 Used: 0% 00:20:42.156 Data Units Read: 0 00:20:42.156 Data Units Written: 0 00:20:42.156 Host Read Commands: 0 00:20:42.156 Host Write Commands: 0 00:20:42.156 Controller Busy Time: 0 minutes 00:20:42.156 Power Cycles: 0 00:20:42.156 Power On Hours: 0 hours 00:20:42.156 Unsafe Shutdowns: 0 00:20:42.156 Unrecoverable Media Errors: 0 00:20:42.156 Lifetime Error Log Entries: 0 00:20:42.156 Warning Temperature Time: 0 minutes 00:20:42.156 Critical Temperature Time: 0 minutes 00:20:42.156 00:20:42.156 Number of Queues 00:20:42.156 ================ 00:20:42.156 Number of I/O Submission Queues: 127 00:20:42.156 Number of I/O Completion Queues: 127 00:20:42.156 00:20:42.156 Active Namespaces 00:20:42.156 ================= 00:20:42.156 Namespace ID:1 00:20:42.156 Error Recovery Timeout: Unlimited 00:20:42.156 Command Set Identifier: NVM (00h) 00:20:42.156 Deallocate: Supported 00:20:42.156 Deallocated/Unwritten Error: Not Supported 00:20:42.156 Deallocated Read Value: Unknown 00:20:42.156 Deallocate in Write Zeroes: Not Supported 00:20:42.156 Deallocated Guard Field: 0xFFFF 00:20:42.156 Flush: Supported 00:20:42.156 Reservation: Supported 00:20:42.156 Namespace Sharing Capabilities: Multiple Controllers 00:20:42.156 Size (in LBAs): 131072 (0GiB) 00:20:42.156 Capacity (in LBAs): 131072 (0GiB) 00:20:42.156 Utilization (in LBAs): 131072 (0GiB) 00:20:42.156 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:42.156 EUI64: ABCDEF0123456789 00:20:42.156 UUID: 04a2440e-c559-4764-97c4-35beeb2d08da 00:20:42.156 Thin Provisioning: Not Supported 00:20:42.156 Per-NS Atomic Units: Yes 00:20:42.156 Atomic Boundary Size (Normal): 0 00:20:42.156 Atomic Boundary Size (PFail): 0 00:20:42.156 Atomic Boundary Offset: 0 00:20:42.156 Maximum Single Source Range Length: 65535 00:20:42.156 Maximum Copy Length: 65535 00:20:42.156 Maximum Source Range Count: 1 00:20:42.156 NGUID/EUI64 Never Reused: No 00:20:42.156 Namespace Write Protected: No 00:20:42.156 Number of LBA Formats: 1 00:20:42.156 Current LBA Format: LBA Format #00 00:20:42.156 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:42.156 00:20:42.156 23:46:31 nvmf_rdma.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:42.156 23:46:31 nvmf_rdma.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.156 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@553 -- # xtrace_disable 00:20:42.156 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
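(Note: for reference, the controller/namespace attributes and SMART-style fields dumped above by identify.sh can also be pulled by hand with stock nvme-cli while the subsystem is still exported, i.e. before the nvmf_delete_subsystem / nvmftestfini teardown traced around this point. The sketch below is illustrative only: it assumes the target listens on 192.168.100.8:4420, per the NVMF_IP_PREFIX/NVMF_PORT defaults this suite sources from nvmf/common.sh, and that the fabrics controller enumerates as /dev/nvme0.)

    # Illustrative manual equivalent of the identify pass (assumed address, port and device names).
    nvme discover -t rdma -a 192.168.100.8 -s 4420
    nvme connect  -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl  /dev/nvme0          # controller attributes (queue counts, serial, capabilities)
    nvme id-ns    /dev/nvme0n1        # namespace attributes (NGUID, EUI64, LBA formats)
    nvme smart-log /dev/nvme0         # the temperature/spare/usage fields printed above
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1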
00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:42.415 rmmod nvme_rdma 00:20:42.415 rmmod nvme_fabrics 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1520138 ']' 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1520138 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@942 -- # '[' -z 1520138 ']' 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@946 -- # kill -0 1520138 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@947 -- # uname 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1520138 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1520138' 00:20:42.415 killing process with pid 1520138 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@961 -- # kill 1520138 00:20:42.415 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@966 -- # wait 1520138 00:20:42.674 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.674 23:46:31 nvmf_rdma.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:42.674 00:20:42.674 real 0m6.697s 00:20:42.674 user 0m7.621s 00:20:42.674 sys 0m4.104s 00:20:42.674 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@1118 -- # xtrace_disable 00:20:42.674 23:46:31 nvmf_rdma.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:42.674 ************************************ 00:20:42.674 END TEST nvmf_identify 00:20:42.674 ************************************ 00:20:42.674 23:46:31 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:20:42.674 23:46:31 nvmf_rdma -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:20:42.674 23:46:31 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:20:42.674 23:46:31 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:20:42.674 23:46:31 
nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:20:42.674 ************************************ 00:20:42.674 START TEST nvmf_perf 00:20:42.674 ************************************ 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:20:42.674 * Looking for test storage... 00:20:42.674 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:42.674 23:46:31 nvmf_rdma.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.675 23:46:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.934 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:42.934 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:42.934 23:46:31 nvmf_rdma.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.934 23:46:31 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == 
mlx5 ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:48.206 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:48.206 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:48.206 Found net devices under 0000:da:00.0: mlx_0_0 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:48.206 Found net devices under 0000:da:00.1: mlx_0_1 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:48.206 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:48.207 23:46:36 
nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:48.207 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:48.207 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:20:48.207 altname enp218s0f0np0 00:20:48.207 altname ens818f0np0 00:20:48.207 inet 192.168.100.8/24 scope global mlx_0_0 00:20:48.207 valid_lft forever preferred_lft forever 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:48.207 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:48.207 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:20:48.207 altname enp218s0f1np1 00:20:48.207 altname ens818f1np1 00:20:48.207 inet 192.168.100.9/24 scope global mlx_0_1 00:20:48.207 valid_lft forever preferred_lft forever 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # 
continue 2 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:48.207 192.168.100.9' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:48.207 192.168.100.9' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:48.207 192.168.100.9' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1523449 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1523449 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@823 -- # '[' -z 1523449 ']' 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@828 -- # local max_retries=100 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@832 -- # xtrace_disable 00:20:48.207 23:46:36 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:48.207 [2024-07-15 23:46:36.916889] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:20:48.207 [2024-07-15 23:46:36.916932] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.207 [2024-07-15 23:46:36.970859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.207 [2024-07-15 23:46:37.050027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.207 [2024-07-15 23:46:37.050066] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.207 [2024-07-15 23:46:37.050073] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.207 [2024-07-15 23:46:37.050079] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.207 [2024-07-15 23:46:37.050084] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:48.207 [2024-07-15 23:46:37.050125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.207 [2024-07-15 23:46:37.050224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.207 [2024-07-15 23:46:37.050253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.207 [2024-07-15 23:46:37.050252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.775 23:46:37 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:20:48.775 23:46:37 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@856 -- # return 0 00:20:48.775 23:46:37 nvmf_rdma.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:48.775 23:46:37 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:48.775 23:46:37 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:49.034 23:46:37 nvmf_rdma.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.034 23:46:37 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:49.034 23:46:37 nvmf_rdma.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:52.320 23:46:40 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:52.320 23:46:40 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:52.320 23:46:40 nvmf_rdma.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:20:52.320 23:46:40 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:52.320 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:52.320 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5f:00.0 ']' 00:20:52.320 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:52.320 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:20:52.320 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:20:52.320 [2024-07-15 23:46:41.298017] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:20:52.579 [2024-07-15 23:46:41.317742] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xcadf10/0xcbbd00) succeed. 00:20:52.579 [2024-07-15 23:46:41.327040] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xcaf550/0xd3bd40) succeed. 
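Note: the target-side bring-up traced above reduces to a short rpc.py sequence. The sketch below is an approximation condensed from the commands visible in this trace; SPDK_DIR is an assumed shorthand (not part of the test scripts), and it presumes nvmf_tgt is already listening on /var/tmp/spdk.sock. The subsystem, namespace and listener RPCs that follow in the trace complete the setup.

# Sketch only: condensed from the host/perf.sh trace above; SPDK_DIR is an assumed shorthand.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK_DIR/scripts/rpc.py"
# RDMA transport; -c 0 is requested here, and the target bumps in-capsule data to the 256 B minimum (see WARNING above).
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
# Malloc bdev with the size/block-size arguments as traced; returns the "Malloc0" bdev used by the subsystem RPCs below.
$RPC bdev_malloc_create 64 512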
00:20:52.579 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:52.838 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:52.838 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:53.096 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:53.096 23:46:41 nvmf_rdma.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:53.096 23:46:42 nvmf_rdma.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:53.355 [2024-07-15 23:46:42.157160] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:53.355 23:46:42 nvmf_rdma.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:53.613 23:46:42 nvmf_rdma.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:20:53.613 23:46:42 nvmf_rdma.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:20:53.613 23:46:42 nvmf_rdma.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:53.613 23:46:42 nvmf_rdma.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:20:55.107 Initializing NVMe Controllers 00:20:55.107 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:20:55.107 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:20:55.107 Initialization complete. Launching workers. 00:20:55.107 ======================================================== 00:20:55.107 Latency(us) 00:20:55.107 Device Information : IOPS MiB/s Average min max 00:20:55.107 PCIE (0000:5f:00.0) NSID 1 from core 0: 99683.85 389.39 320.75 31.82 7189.88 00:20:55.107 ======================================================== 00:20:55.107 Total : 99683.85 389.39 320.75 31.82 7189.88 00:20:55.107 00:20:55.107 23:46:43 nvmf_rdma.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:20:58.414 Initializing NVMe Controllers 00:20:58.414 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:58.414 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:58.414 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:58.414 Initialization complete. Launching workers. 
00:20:58.414 ======================================================== 00:20:58.414 Latency(us) 00:20:58.414 Device Information : IOPS MiB/s Average min max 00:20:58.414 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6703.99 26.19 148.36 48.04 4082.33 00:20:58.414 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5237.99 20.46 190.70 70.79 4106.02 00:20:58.414 ======================================================== 00:20:58.414 Total : 11941.99 46.65 166.93 48.04 4106.02 00:20:58.414 00:20:58.414 23:46:46 nvmf_rdma.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:01.696 Initializing NVMe Controllers 00:21:01.697 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:01.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:01.697 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:01.697 Initialization complete. Launching workers. 00:21:01.697 ======================================================== 00:21:01.697 Latency(us) 00:21:01.697 Device Information : IOPS MiB/s Average min max 00:21:01.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18255.25 71.31 1753.24 493.29 6068.31 00:21:01.697 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4031.83 15.75 7979.05 4771.19 11033.52 00:21:01.697 ======================================================== 00:21:01.697 Total : 22287.09 87.06 2879.52 493.29 11033.52 00:21:01.697 00:21:01.697 23:46:50 nvmf_rdma.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:21:01.697 23:46:50 nvmf_rdma.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:21:05.882 Initializing NVMe Controllers 00:21:05.882 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:05.882 Controller IO queue size 128, less than required. 00:21:05.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:05.882 Controller IO queue size 128, less than required. 00:21:05.882 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:05.882 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:05.882 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:05.882 Initialization complete. Launching workers. 
00:21:05.882 ======================================================== 00:21:05.882 Latency(us) 00:21:05.882 Device Information : IOPS MiB/s Average min max 00:21:05.882 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3918.71 979.68 32675.94 13419.27 73525.63 00:21:05.882 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4068.93 1017.23 31265.46 14067.29 55232.02 00:21:05.882 ======================================================== 00:21:05.882 Total : 7987.65 1996.91 31957.44 13419.27 73525.63 00:21:05.882 00:21:05.882 23:46:54 nvmf_rdma.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:21:06.140 No valid NVMe controllers or AIO or URING devices found 00:21:06.140 Initializing NVMe Controllers 00:21:06.140 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.140 Controller IO queue size 128, less than required. 00:21:06.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:06.140 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:06.140 Controller IO queue size 128, less than required. 00:21:06.140 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:06.140 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:06.140 WARNING: Some requested NVMe devices were skipped 00:21:06.140 23:46:55 nvmf_rdma.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:21:11.408 Initializing NVMe Controllers 00:21:11.408 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.408 Controller IO queue size 128, less than required. 00:21:11.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:11.408 Controller IO queue size 128, less than required. 00:21:11.408 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:11.408 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:11.408 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:11.408 Initialization complete. Launching workers. 
00:21:11.408 00:21:11.408 ==================== 00:21:11.408 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:11.408 RDMA transport: 00:21:11.408 dev name: mlx5_0 00:21:11.408 polls: 395800 00:21:11.408 idle_polls: 392313 00:21:11.408 completions: 43650 00:21:11.408 queued_requests: 1 00:21:11.408 total_send_wrs: 21825 00:21:11.408 send_doorbell_updates: 3258 00:21:11.408 total_recv_wrs: 21952 00:21:11.408 recv_doorbell_updates: 3260 00:21:11.408 --------------------------------- 00:21:11.408 00:21:11.408 ==================== 00:21:11.408 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:11.408 RDMA transport: 00:21:11.408 dev name: mlx5_0 00:21:11.408 polls: 403246 00:21:11.408 idle_polls: 402961 00:21:11.408 completions: 20330 00:21:11.408 queued_requests: 1 00:21:11.408 total_send_wrs: 10165 00:21:11.408 send_doorbell_updates: 258 00:21:11.408 total_recv_wrs: 10292 00:21:11.408 recv_doorbell_updates: 259 00:21:11.408 --------------------------------- 00:21:11.408 ======================================================== 00:21:11.408 Latency(us) 00:21:11.408 Device Information : IOPS MiB/s Average min max 00:21:11.408 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5447.01 1361.75 23544.79 11428.58 54657.23 00:21:11.408 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2536.81 634.20 50616.37 29303.99 72037.78 00:21:11.408 ======================================================== 00:21:11.408 Total : 7983.83 1995.96 32146.63 11428.58 72037.78 00:21:11.408 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:11.408 rmmod nvme_rdma 00:21:11.408 rmmod nvme_fabrics 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1523449 ']' 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1523449 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@942 -- # '[' -z 1523449 ']' 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@946 -- # kill -0 1523449 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@947 -- # uname 00:21:11.408 23:46:59 nvmf_rdma.nvmf_perf -- 
common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:11.409 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1523449 00:21:11.409 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:11.409 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:11.409 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1523449' 00:21:11.409 killing process with pid 1523449 00:21:11.409 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@961 -- # kill 1523449 00:21:11.409 23:46:59 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@966 -- # wait 1523449 00:21:13.313 23:47:01 nvmf_rdma.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:13.313 23:47:01 nvmf_rdma.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:13.313 00:21:13.313 real 0m30.315s 00:21:13.313 user 1m41.103s 00:21:13.313 sys 0m4.940s 00:21:13.313 23:47:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:13.313 23:47:01 nvmf_rdma.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:13.313 ************************************ 00:21:13.313 END TEST nvmf_perf 00:21:13.313 ************************************ 00:21:13.313 23:47:01 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:21:13.313 23:47:01 nvmf_rdma -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:13.313 23:47:01 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:13.313 23:47:01 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:13.313 23:47:01 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:13.313 ************************************ 00:21:13.313 START TEST nvmf_fio_host 00:21:13.313 ************************************ 00:21:13.313 23:47:01 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:13.313 * Looking for test storage... 
00:21:13.313 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:13.313 23:47:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:13.313 23:47:02 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.313 23:47:02 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.313 23:47:02 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.314 23:47:02 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.584 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:18.584 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:18.584 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:18.584 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:18.584 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:18.584 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 
00:21:18.585 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:18.585 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:18.585 Found net devices under 0000:da:00.0: mlx_0_0 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:18.585 Found net devices under 0000:da:00.1: mlx_0_1 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ 
yes == yes ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:18.585 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:18.585 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:21:18.585 altname enp218s0f0np0 00:21:18.585 altname ens818f0np0 00:21:18.585 inet 192.168.100.8/24 scope global mlx_0_0 00:21:18.585 valid_lft forever preferred_lft forever 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:18.585 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:18.585 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:21:18.585 altname enp218s0f1np1 00:21:18.585 altname ens818f1np1 00:21:18.585 inet 192.168.100.9/24 scope global mlx_0_1 00:21:18.585 valid_lft forever preferred_lft forever 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:18.585 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- 
# continue 2 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:18.586 192.168.100.9' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:18.586 192.168.100.9' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:18.586 192.168.100.9' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1530486 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1530486 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@823 -- # '[' -z 1530486 ']' 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:18.586 23:47:06 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.586 [2024-07-15 23:47:06.960325] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:18.586 [2024-07-15 23:47:06.960371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.586 [2024-07-15 23:47:07.016168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:18.586 [2024-07-15 23:47:07.095982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.586 [2024-07-15 23:47:07.096017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.586 [2024-07-15 23:47:07.096024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.586 [2024-07-15 23:47:07.096031] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.586 [2024-07-15 23:47:07.096036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.586 [2024-07-15 23:47:07.096078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.586 [2024-07-15 23:47:07.096173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.586 [2024-07-15 23:47:07.096199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:18.586 [2024-07-15 23:47:07.096200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.898 23:47:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:18.898 23:47:07 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@856 -- # return 0 00:21:18.898 23:47:07 nvmf_rdma.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:19.156 [2024-07-15 23:47:07.932446] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x202dcc0/0x20321b0) succeed. 
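The nvmf_tgt bring-up traced here reduces to a short command sequence. The sketch below is a hedged reconstruction of what host/fio.sh runs, not a verbatim excerpt: SPDK_DIR stands in for the Jenkins workspace checkout, and the RPC-socket poll stands in for the harness's waitforlisten helper.

  # Condensed sketch of the target bring-up traced above; SPDK_DIR is assumed shorthand
  # for /var/jenkins/workspace/nvmf-phy-autotest/spdk.
  SPDK_DIR=/path/to/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  # Start the NVMe-oF target on four cores (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF).
  "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # The harness uses waitforlisten; polling the default RPC socket achieves the same thing.
  until "$RPC" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

  # Create the RDMA transport exactly as traced: 1024 shared buffers, 8 KiB in-capsule data.
  "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192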
00:21:19.156 [2024-07-15 23:47:07.941657] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x202f300/0x2073840) succeed. 00:21:19.156 23:47:08 nvmf_rdma.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:19.156 23:47:08 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:19.156 23:47:08 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:19.156 23:47:08 nvmf_rdma.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:19.413 Malloc1 00:21:19.413 23:47:08 nvmf_rdma.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:19.671 23:47:08 nvmf_rdma.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:19.928 23:47:08 nvmf_rdma.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:19.929 [2024-07-15 23:47:08.850735] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:19.929 23:47:08 nvmf_rdma.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libasan 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:21:20.187 
23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:20.187 23:47:09 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:20.445 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:20.445 fio-3.35 00:21:20.445 Starting 1 thread 00:21:22.981 00:21:22.981 test: (groupid=0, jobs=1): err= 0: pid=1531059: Mon Jul 15 23:47:11 2024 00:21:22.981 read: IOPS=17.4k, BW=68.1MiB/s (71.4MB/s)(137MiB/2004msec) 00:21:22.981 slat (nsec): min=1405, max=36611, avg=1560.93, stdev=503.55 00:21:22.981 clat (usec): min=2315, max=6656, avg=3642.15, stdev=127.36 00:21:22.981 lat (usec): min=2332, max=6657, avg=3643.71, stdev=127.34 00:21:22.981 clat percentiles (usec): 00:21:22.981 | 1.00th=[ 3294], 5.00th=[ 3589], 10.00th=[ 3621], 20.00th=[ 3621], 00:21:22.981 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3654], 00:21:22.981 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3687], 00:21:22.981 | 99.00th=[ 3982], 99.50th=[ 4015], 99.90th=[ 5735], 99.95th=[ 5800], 00:21:22.981 | 99.99th=[ 6652] 00:21:22.981 bw ( KiB/s): min=68008, max=70576, per=100.00%, avg=69750.00, stdev=1186.44, samples=4 00:21:22.981 iops : min=17002, max=17644, avg=17437.50, stdev=296.61, samples=4 00:21:22.981 write: IOPS=17.5k, BW=68.2MiB/s (71.5MB/s)(137MiB/2004msec); 0 zone resets 00:21:22.981 slat (nsec): min=1455, max=24308, avg=1667.77, stdev=525.35 00:21:22.981 clat (usec): min=2330, max=6650, avg=3641.98, stdev=133.17 00:21:22.981 lat (usec): min=2341, max=6651, avg=3643.64, stdev=133.15 00:21:22.981 clat percentiles (usec): 00:21:22.981 | 1.00th=[ 3294], 5.00th=[ 3589], 10.00th=[ 3621], 20.00th=[ 3621], 00:21:22.981 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3621], 60.00th=[ 3654], 00:21:22.981 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3687], 00:21:22.981 | 99.00th=[ 3982], 99.50th=[ 4015], 99.90th=[ 5735], 99.95th=[ 5800], 00:21:22.981 | 99.99th=[ 6587] 00:21:22.981 bw ( KiB/s): min=68184, max=70448, per=100.00%, avg=69838.00, stdev=1104.68, samples=4 00:21:22.981 iops : min=17046, max=17612, avg=17459.50, stdev=276.17, samples=4 00:21:22.981 lat (msec) : 4=99.45%, 10=0.55% 00:21:22.981 cpu : usr=99.55%, sys=0.05%, ctx=19, majf=0, minf=4 00:21:22.981 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:22.981 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:22.981 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:21:22.981 issued rwts: total=34944,34977,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:22.981 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:22.981 00:21:22.981 Run status group 0 (all jobs): 00:21:22.981 READ: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=137MiB (143MB), run=2004-2004msec 00:21:22.981 WRITE: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=137MiB (143MB), run=2004-2004msec 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1331 -- # local fio_dir=/usr/src/fio 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local sanitizers 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1335 -- # shift 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local asan_lib= 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libasan 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1338 -- # for sanitizer in "${sanitizers[@]}" 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # grep libclang_rt.asan 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # awk '{print $3}' 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1339 -- # asan_lib= 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1340 -- # [[ -n '' ]] 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:22.981 23:47:11 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1346 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:23.239 test: (g=0): rw=randrw, bs=(R) 
16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:23.239 fio-3.35 00:21:23.239 Starting 1 thread 00:21:25.793 00:21:25.793 test: (groupid=0, jobs=1): err= 0: pid=1531629: Mon Jul 15 23:47:14 2024 00:21:25.793 read: IOPS=14.2k, BW=222MiB/s (233MB/s)(437MiB/1970msec) 00:21:25.793 slat (nsec): min=2313, max=45839, avg=2726.45, stdev=1219.17 00:21:25.793 clat (usec): min=490, max=9708, avg=1671.83, stdev=1371.05 00:21:25.793 lat (usec): min=493, max=9712, avg=1674.56, stdev=1371.56 00:21:25.793 clat percentiles (usec): 00:21:25.793 | 1.00th=[ 701], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 947], 00:21:25.793 | 30.00th=[ 1012], 40.00th=[ 1106], 50.00th=[ 1205], 60.00th=[ 1319], 00:21:25.793 | 70.00th=[ 1450], 80.00th=[ 1631], 90.00th=[ 4293], 95.00th=[ 5145], 00:21:25.793 | 99.00th=[ 6718], 99.50th=[ 7308], 99.90th=[ 8848], 99.95th=[ 9241], 00:21:25.793 | 99.99th=[ 9634] 00:21:25.793 bw ( KiB/s): min=108614, max=112672, per=48.59%, avg=110473.50, stdev=1675.54, samples=4 00:21:25.793 iops : min= 6788, max= 7042, avg=6904.50, stdev=104.86, samples=4 00:21:25.793 write: IOPS=8050, BW=126MiB/s (132MB/s)(225MiB/1792msec); 0 zone resets 00:21:25.793 slat (usec): min=27, max=117, avg=30.13, stdev= 6.91 00:21:25.793 clat (usec): min=4253, max=19946, avg=12952.01, stdev=1867.59 00:21:25.793 lat (usec): min=4283, max=19974, avg=12982.14, stdev=1866.69 00:21:25.793 clat percentiles (usec): 00:21:25.793 | 1.00th=[ 6456], 5.00th=[10159], 10.00th=[10814], 20.00th=[11600], 00:21:25.793 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:21:25.793 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15270], 95.00th=[15926], 00:21:25.793 | 99.00th=[17433], 99.50th=[17957], 99.90th=[19530], 99.95th=[19530], 00:21:25.793 | 99.99th=[20055] 00:21:25.793 bw ( KiB/s): min=110626, max=116736, per=88.85%, avg=114448.50, stdev=2654.40, samples=4 00:21:25.793 iops : min= 6914, max= 7296, avg=7153.00, stdev=165.96, samples=4 00:21:25.793 lat (usec) : 500=0.01%, 750=1.56%, 1000=16.87% 00:21:25.793 lat (msec) : 2=38.85%, 4=1.77%, 10=8.21%, 20=32.74% 00:21:25.793 cpu : usr=97.46%, sys=0.90%, ctx=184, majf=0, minf=3 00:21:25.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:25.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:25.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:25.794 issued rwts: total=27996,14427,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:25.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:25.794 00:21:25.794 Run status group 0 (all jobs): 00:21:25.794 READ: bw=222MiB/s (233MB/s), 222MiB/s-222MiB/s (233MB/s-233MB/s), io=437MiB (459MB), run=1970-1970msec 00:21:25.794 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=225MiB (236MB), run=1792-1792msec 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- 
nvmf/common.sh@117 -- # sync 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:25.794 rmmod nvme_rdma 00:21:25.794 rmmod nvme_fabrics 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1530486 ']' 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1530486 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@942 -- # '[' -z 1530486 ']' 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@946 -- # kill -0 1530486 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@947 -- # uname 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1530486 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1530486' 00:21:25.794 killing process with pid 1530486 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@961 -- # kill 1530486 00:21:25.794 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@966 -- # wait 1530486 00:21:26.052 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:26.053 23:47:14 nvmf_rdma.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:26.053 00:21:26.053 real 0m13.040s 00:21:26.053 user 0m49.418s 00:21:26.053 sys 0m4.569s 00:21:26.053 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:21:26.053 23:47:14 nvmf_rdma.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:26.053 ************************************ 00:21:26.053 END TEST nvmf_fio_host 00:21:26.053 ************************************ 00:21:26.053 23:47:15 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:21:26.053 23:47:15 nvmf_rdma -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:26.053 23:47:15 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:21:26.053 23:47:15 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:21:26.053 23:47:15 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:21:26.311 ************************************ 00:21:26.311 START TEST nvmf_failover 00:21:26.311 ************************************ 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:26.312 * Looking for test storage... 
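As a recap of the nvmf_fio_host run that finishes above, before the failover trace continues: the target exported a 64 MB malloc bdev as namespace 1 of nqn.2016-06.io.spdk:cnode1 on an RDMA listener, and fio drove it through the SPDK NVMe plugin. A minimal sketch of that data path, with SPDK_DIR again an assumed shorthand for the workspace checkout:

  # Sketch of the fio_host data path traced above; SPDK_DIR is assumed shorthand.
  SPDK_DIR=/path/to/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  # Export a 64 MB malloc bdev (512-byte blocks) over RDMA on 192.168.100.8:4420.
  "$RPC" bdev_malloc_create 64 512 -b Malloc1
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Run fio with the SPDK NVMe ioengine preloaded, addressing the namespace by transport ID.
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_nvme" /usr/src/fio/fio \
      "$SPDK_DIR/app/fio/nvme/example_config.fio" \
      '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096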
00:21:26.312 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@410 
-- # local -g is_hw=no 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:26.312 23:47:15 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:31.580 
23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:31.580 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:31.580 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:31.580 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:31.581 Found net devices under 0000:da:00.0: mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- 
nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:31.581 Found net devices under 0000:da:00.1: mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # 
[[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:31.581 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:31.581 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:21:31.581 altname enp218s0f0np0 00:21:31.581 altname ens818f0np0 00:21:31.581 inet 192.168.100.8/24 scope global mlx_0_0 00:21:31.581 valid_lft forever preferred_lft forever 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:31.581 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:31.581 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:21:31.581 altname enp218s0f1np1 00:21:31.581 altname ens818f1np1 00:21:31.581 inet 192.168.100.9/24 scope global mlx_0_1 00:21:31.581 valid_lft forever preferred_lft forever 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:31.581 23:47:20 
nvmf_rdma.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:31.581 192.168.100.9' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:31.581 192.168.100.9' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:31.581 192.168.100.9' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@459 -- # 
'[' -z 192.168.100.8 ']' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1535138 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1535138 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1535138 ']' 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:31.581 23:47:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.582 23:47:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:31.582 23:47:20 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:31.582 [2024-07-15 23:47:20.505762] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:31.582 [2024-07-15 23:47:20.505806] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.840 [2024-07-15 23:47:20.561091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:31.840 [2024-07-15 23:47:20.640182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.840 [2024-07-15 23:47:20.640216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.840 [2024-07-15 23:47:20.640223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.840 [2024-07-15 23:47:20.640229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.840 [2024-07-15 23:47:20.640234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
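The RPC sequence that follows in the trace gives the failover test its raw material: one malloc-backed subsystem reachable through three RDMA listeners (ports 4420, 4421 and 4422) on the same address. In outline, with the same assumed SPDK_DIR shorthand as in the earlier sketches:

  # Outline of the failover.sh target setup traced below; SPDK_DIR is assumed shorthand.
  SPDK_DIR=/path/to/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  "$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0

  # Three listeners on the same IP so paths can be dropped and restored one at a time.
  for port in 4420 4421 4422; do
      "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
          -t rdma -a 192.168.100.8 -s "$port"
  done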
00:21:31.840 [2024-07-15 23:47:20.640328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.840 [2024-07-15 23:47:20.640414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.840 [2024-07-15 23:47:20.640415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.406 23:47:21 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:32.406 23:47:21 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:21:32.406 23:47:21 nvmf_rdma.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:32.406 23:47:21 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:32.406 23:47:21 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:32.406 23:47:21 nvmf_rdma.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.406 23:47:21 nvmf_rdma.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:32.664 [2024-07-15 23:47:21.532432] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1150200/0x11546f0) succeed. 00:21:32.664 [2024-07-15 23:47:21.541481] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x11517a0/0x1195d80) succeed. 00:21:32.922 23:47:21 nvmf_rdma.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:32.922 Malloc0 00:21:32.922 23:47:21 nvmf_rdma.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:33.180 23:47:22 nvmf_rdma.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:33.437 23:47:22 nvmf_rdma.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:33.437 [2024-07-15 23:47:22.340976] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:33.437 23:47:22 nvmf_rdma.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:33.695 [2024-07-15 23:47:22.513305] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:33.695 23:47:22 nvmf_rdma.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:33.953 [2024-07-15 23:47:22.706011] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1535553 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; 
rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1535553 /var/tmp/bdevperf.sock 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1535553 ']' 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:33.953 23:47:22 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:34.885 23:47:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:34.885 23:47:23 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:21:34.885 23:47:23 nvmf_rdma.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.885 NVMe0n1 00:21:34.885 23:47:23 nvmf_rdma.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:35.142 00:21:35.142 23:47:24 nvmf_rdma.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1535755 00:21:35.143 23:47:24 nvmf_rdma.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.143 23:47:24 nvmf_rdma.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:36.515 23:47:25 nvmf_rdma.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:36.515 23:47:25 nvmf_rdma.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:39.793 23:47:28 nvmf_rdma.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:39.793 00:21:39.793 23:47:28 nvmf_rdma.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:39.793 23:47:28 nvmf_rdma.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:43.066 23:47:31 nvmf_rdma.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:43.066 [2024-07-15 23:47:31.883117] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:43.066 23:47:31 nvmf_rdma.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:44.003 23:47:32 nvmf_rdma.nvmf_failover -- 
host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:44.261 23:47:33 nvmf_rdma.nvmf_failover -- host/failover.sh@59 -- # wait 1535755 00:21:50.825 0 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- host/failover.sh@61 -- # killprocess 1535553 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1535553 ']' 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1535553 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1535553 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1535553' 00:21:50.825 killing process with pid 1535553 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1535553 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1535553 00:21:50.825 23:47:39 nvmf_rdma.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:50.825 [2024-07-15 23:47:22.777566] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:21:50.825 [2024-07-15 23:47:22.777621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535553 ] 00:21:50.825 [2024-07-15 23:47:22.836271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.825 [2024-07-15 23:47:22.911285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.825 Running I/O for 15 seconds... 
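Condensed, the target/initiator setup traced above reduces to the sketch below. The commands are copied from the trace itself; SPDK_DIR, the explicit loop over ports, and an already-running nvmf target plus a bdevperf instance listening on /var/tmp/bdevperf.sock are assumptions matching this job's workspace rather than part of the log.

  #!/usr/bin/env bash
  # Sketch of the nvmf_failover setup seen in this trace (assumptions noted above).
  SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  RPC=$SPDK_DIR/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  ADDR=192.168.100.8

  # Target side: RDMA transport, one 64 MB malloc bdev (512-byte blocks)
  # exported as a namespace, and listeners on three ports.
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns $NQN Malloc0
  for port in 4420 4421 4422; do
      $RPC nvmf_subsystem_add_listener $NQN -t rdma -a $ADDR -s $port
  done

  # Initiator side: bdevperf (-q 128 -o 4096 -w verify -t 15 -f) attaches two
  # paths to the same subsystem, then the 15-second verify workload is started.
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a $ADDR -s 4420 -f ipv4 -n $NQN
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a $ADDR -s 4421 -f ipv4 -n $NQN
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &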
00:21:50.825 [2024-07-15 23:47:26.245789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:20536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.245986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.245993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:20648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.825 [2024-07-15 23:47:26.246178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183e00 00:21:50.825 [2024-07-15 23:47:26.246185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 
key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:20744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:20808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 
[2024-07-15 23:47:26.246652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.826 [2024-07-15 23:47:26.246757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183e00 00:21:50.826 [2024-07-15 23:47:26.246763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:21016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:21024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 
lba:21088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:21112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.246987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.246997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:21128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 
key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:21176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247197] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.827 [2024-07-15 23:47:26.247355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x183e00 00:21:50.827 [2024-07-15 23:47:26.247361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:21328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 
00:21:50.828 [2024-07-15 23:47:26.247470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:21392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:21408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.247530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.247536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:21424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:26.255800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.828 [2024-07-15 23:47:26.255815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.255823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:21512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.828 [2024-07-15 23:47:26.255830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.257748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.828 [2024-07-15 23:47:26.257760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.828 [2024-07-15 23:47:26.257766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21520 len:8 PRP1 0x0 PRP2 0x0 00:21:50.828 [2024-07-15 23:47:26.257773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.257809] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:21:50.828 [2024-07-15 23:47:26.257817] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:21:50.828 [2024-07-15 23:47:26.257825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.828 [2024-07-15 23:47:26.257858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.828 [2024-07-15 23:47:26.257867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.257875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.828 [2024-07-15 23:47:26.257882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.257889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.828 [2024-07-15 23:47:26.257895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.257902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.828 [2024-07-15 23:47:26.257909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:26.275264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:50.828 [2024-07-15 23:47:26.275279] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:50.828 [2024-07-15 23:47:26.275285] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:50.828 [2024-07-15 23:47:26.278081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.828 [2024-07-15 23:47:26.327964] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
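The ABORTED - SQ DELETION burst above is the queued verify I/O being failed when listener 4420 is removed; bdev_nvme then fails the trid over from 192.168.100.8:4420 to 4421 and the controller reset completes. The later bursts in this try.txt (23:47:29 onward) come from the rest of the listener rotation, which in sketch form (same placeholder variables and sleeps as in the earlier sketch) is:

  # Rotate listeners to force further failovers while I/O stays in flight.
  $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a $ADDR -s 4420   # burst above
  sleep 3
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t rdma -a $ADDR -s 4422 -f ipv4 -n $NQN                        # third path
  $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a $ADDR -s 4421   # 23:47:29 burst below
  sleep 3
  $RPC nvmf_subsystem_add_listener $NQN -t rdma -a $ADDR -s 4420
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t rdma -a $ADDR -s 4422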
00:21:50.828 [2024-07-15 23:47:29.705210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:29.705248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:29.705264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:29.705271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:29.705280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:109888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:29.705287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:29.705295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:109896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183e00 00:21:50.828 [2024-07-15 23:47:29.705301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:29.705309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.828 [2024-07-15 23:47:29.705315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:29.705323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.828 [2024-07-15 23:47:29.705329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.828 [2024-07-15 23:47:29.705337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.828 [2024-07-15 23:47:29.705343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 
23:47:29.705384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.829 [2024-07-15 23:47:29.705655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:109968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705668] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:109992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.829 [2024-07-15 23:47:29.705803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183e00 00:21:50.829 [2024-07-15 23:47:29.705810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.705824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.705838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.705851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.705866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:110088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.705879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.705893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.705907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.705922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.705936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.705950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.705964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.705978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.705986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.705992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:110112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706062] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110600 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706344] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.830 [2024-07-15 23:47:29.706350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183e00 00:21:50.830 [2024-07-15 23:47:29.706393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.830 [2024-07-15 23:47:29.706402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:110232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706624] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:110264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 
23:47:29.706757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 
23:47:29.706900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x183e00 00:21:50.831 [2024-07-15 23:47:29.706935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.706988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.831 [2024-07-15 23:47:29.706994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.831 [2024-07-15 23:47:29.707002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:110840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.832 [2024-07-15 23:47:29.707008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:29.707016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:110848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.832 [2024-07-15 23:47:29.707022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:29.707030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:110856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.832 [2024-07-15 23:47:29.707036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:29.707044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:110864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.832 [2024-07-15 23:47:29.707051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:29.707058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:110872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.832 [2024-07-15 23:47:29.707064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:29.707072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:110880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.832 [2024-07-15 23:47:29.707080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:29.708824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.832 [2024-07-15 23:47:29.708842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.832 [2024-07-15 23:47:29.708850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:110888 len:8 PRP1 0x0 PRP2 0x0 00:21:50.832 [2024-07-15 23:47:29.708857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:29.708896] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:21:50.832 [2024-07-15 23:47:29.708904] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:21:50.832 [2024-07-15 23:47:29.708912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.832 [2024-07-15 23:47:29.711717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.832 [2024-07-15 23:47:29.726162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:50.832 [2024-07-15 23:47:29.774059] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
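This second successful reset completes the 192.168.100.8:4421 to 192.168.100.8:4422 hop, repeating the same pattern: qpair 0x2000192e48c0 is freed, the queued READ/WRITE commands above are aborted with SQ DELETION status, a transient CQ transport error -6 appears while the old queue pair goes away, and the hop ends with "Resetting controller successful". A log like this is typically produced by exposing one NVMe-oF subsystem on several RDMA listeners and attaching the host-side bdev_nvme controller to each portal under the same name, so the bdev layer has alternate transport IDs to fail over to. A rough sketch with SPDK's rpc.py follows; the address, ports, and subsystem NQN mirror the log, but the exact options and flow of this job's test script are not shown here, so treat it as an illustration only.

#!/usr/bin/env bash
# Rough sketch (not this job's actual test script): one subsystem on three RDMA
# listeners, plus host-side alternate paths for bdev_nvme failover.
rpc=scripts/rpc.py
ip=192.168.100.8
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t RDMA
$rpc bdev_malloc_create -b Malloc0 64 512
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns "$nqn" Malloc0
for port in 4420 4421 4422; do
  $rpc nvmf_subsystem_add_listener "$nqn" -t rdma -a "$ip" -s "$port" -f ipv4
done

# Host side: attaching the same controller name once per portal registers the
# extra portals as alternate transport IDs used by bdev_nvme_failover_trid.
# Depending on SPDK version, a multipath/failover mode option may also be needed,
# and in a real run the attach goes to the initiator app's RPC socket, not the target's.
for port in 4420 4421 4422; do
  $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -a "$ip" -s "$port" -f ipv4 -n "$nqn"
done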
00:21:50.832 [2024-07-15 23:47:34.080358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:74504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:74544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:74624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ba000 len:0x1000 
key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x183e00 00:21:50.832 [2024-07-15 23:47:34.080835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.832 [2024-07-15 23:47:34.080843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b4000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.080849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b2000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.080863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.080878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.080893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.080907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.080921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.080935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 
23:47:34.080942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.080948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.080962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.080976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.080990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.080998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.081004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.081018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.081032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.081046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:75184 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:75264 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.833 [2024-07-15 23:47:34.081270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.081284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.081299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.081312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183e00 00:21:50.833 [2024-07-15 23:47:34.081326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.833 [2024-07-15 23:47:34.081334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:74824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 
23:47:34.081475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081617] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.834 [2024-07-15 23:47:34.081638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:74992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:75000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:75008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c6000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c4000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:75040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 
len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:75056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b0000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f2000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f4000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f6000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f8000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:75096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fa000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:75104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fc000 len:0x1000 key:0x183e00 00:21:50.834 [2024-07-15 23:47:34.081873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.834 [2024-07-15 23:47:34.081887] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.834 [2024-07-15 23:47:34.081903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.834 [2024-07-15 23:47:34.081917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.834 [2024-07-15 23:47:34.081925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.081931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.081939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.081946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.081954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.081960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.081968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.081975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.081982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.081989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.081998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:75376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:75384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:75392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 
23:47:34.082033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:75400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:75408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:75432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:75440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:75456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:75472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:50.835 [2024-07-15 23:47:34.082179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:75480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:75488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:75496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:50.835 [2024-07-15 23:47:34.082223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.082231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x183e00 00:21:50.835 [2024-07-15 23:47:34.082237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:2dd19000 sqhd:52b0 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.084121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:50.835 [2024-07-15 23:47:34.084133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:50.835 [2024-07-15 23:47:34.084142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:75120 len:8 PRP1 0x0 PRP2 0x0 00:21:50.835 [2024-07-15 23:47:34.084148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.835 [2024-07-15 23:47:34.084186] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:21:50.835 [2024-07-15 23:47:34.084194] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:21:50.835 [2024-07-15 23:47:34.084202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.835 [2024-07-15 23:47:34.087007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.835 [2024-07-15 23:47:34.101282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:50.835 [2024-07-15 23:47:34.146603] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
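The block above is bdev_nvme draining I/O qpair 1 during the forced failover: every queued READ and WRITE is completed with ABORTED - SQ DELETION (status 00/08) when the submission queue is deleted, the qpair is disconnected and freed, the trid fails over from 192.168.100.8:4422 back to 4420, and the controller is reset. A hedged sketch of one way to tally those notices from the captured bdevperf output; try.txt is the file this test writes and the grep patterns are the exact strings printed above.
# Hedged sketch, not part of failover.sh: count the notices shown above in the captured log.
log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
aborted=$(grep -c 'ABORTED - SQ DELETION' "$log")
resets=$(grep -c 'Resetting controller successful' "$log")
echo "aborted completions: $aborted, successful controller resets: $resets"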
00:21:50.835
00:21:50.835 Latency(us)
00:21:50.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.835 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:50.835 Verification LBA range: start 0x0 length 0x4000
00:21:50.835 NVMe0n1 : 15.01 14121.30 55.16 343.68 0.00 8825.93 351.09 1038589.56
00:21:50.835 ===================================================================================================================
00:21:50.835 Total : 14121.30 55.16 343.68 0.00 8825.93 351.09 1038589.56
00:21:50.835 Received shutdown signal, test time was about 15.000000 seconds
00:21:50.835
00:21:50.835 Latency(us)
00:21:50.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.835 ===================================================================================================================
00:21:50.835 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:47:39 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1538198
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1538198 /var/tmp/bdevperf.sock
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@823 -- # '[' -z 1538198 ']'
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@828 -- # local max_retries=100
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
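Taken together with the steps that follow, this is the failover flow under test: bdevperf is started paused (-z) on its own RPC socket, extra listeners are opened on the target, the same subsystem is attached three times under the shared controller name NVMe0, and the active path is detached to force a failover while the verify workload runs. A condensed, hedged sketch of that sequence; the workspace path, addresses, ports and NQN are the ones in this trace, and the polling loop stands in for the waitforlisten helper using the generic rpc_get_methods RPC.
# Hedged sketch of the flow traced in this log; not the failover.sh source itself.
cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
rpc=./scripts/rpc.py
# 1) start bdevperf idle (-z) so bdevs can be attached over /var/tmp/bdevperf.sock first
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
until $rpc -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
# 2) open the two extra target listeners used as failover destinations
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
# 3) register three paths to the same subsystem under the shared controller name NVMe0
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
# 4) dropping the active 4420 path forces bdev_nvme to fail over to the next registered trid
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1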
00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@832 -- # xtrace_disable 00:21:50.835 23:47:39 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:51.400 23:47:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:21:51.400 23:47:40 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@856 -- # return 0 00:21:51.400 23:47:40 nvmf_rdma.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:21:51.707 [2024-07-15 23:47:40.509020] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:21:51.707 23:47:40 nvmf_rdma.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:21:51.991 [2024-07-15 23:47:40.681585] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:21:51.991 23:47:40 nvmf_rdma.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.991 NVMe0n1 00:21:52.252 23:47:40 nvmf_rdma.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.252 00:21:52.252 23:47:41 nvmf_rdma.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:52.509 00:21:52.509 23:47:41 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:52.509 23:47:41 nvmf_rdma.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:52.766 23:47:41 nvmf_rdma.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.022 23:47:41 nvmf_rdma.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:56.299 23:47:44 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:56.299 23:47:44 nvmf_rdma.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:56.299 23:47:44 nvmf_rdma.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:56.299 23:47:44 nvmf_rdma.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1539095 00:21:56.299 23:47:44 nvmf_rdma.nvmf_failover -- host/failover.sh@92 -- # wait 1539095 00:21:57.231 0 00:21:57.231 23:47:46 nvmf_rdma.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:57.231 [2024-07-15 23:47:39.536684] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 
00:21:57.231 [2024-07-15 23:47:39.536741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538198 ]
00:21:57.231 [2024-07-15 23:47:39.594386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:57.231 [2024-07-15 23:47:39.663622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:57.231 [2024-07-15 23:47:41.773236] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:21:57.231 [2024-07-15 23:47:41.773872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:57.231 [2024-07-15 23:47:41.773903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:57.231 [2024-07-15 23:47:41.796990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:57.231 [2024-07-15 23:47:41.812896] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:57.231 Running I/O for 1 seconds...
00:21:57.231
00:21:57.231 Latency(us)
00:21:57.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:57.231 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:57.231 Verification LBA range: start 0x0 length 0x4000
00:21:57.231 NVMe0n1 : 1.01 17698.34 69.13 0.00 0.00 7192.49 2574.63 15291.73
00:21:57.231 ===================================================================================================================
00:21:57.231 Total : 17698.34 69.13 0.00 0.00 7192.49 2574.63 15291.73
00:21:57.231 23:47:46 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:57.231 23:47:46 nvmf_rdma.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:21:57.488 23:47:46 nvmf_rdma.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:57.488 23:47:46 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:57.488 23:47:46 nvmf_rdma.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:57.746 23:47:46 nvmf_rdma.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:58.002 23:47:46 nvmf_rdma.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:22:01.280 23:47:49 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:01.280 23:47:49 nvmf_rdma.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- host/failover.sh@108 -- # killprocess 1538198
00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1538198 ']'
00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946
-- # kill -0 1538198 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1538198 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1538198' 00:22:01.280 killing process with pid 1538198 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1538198 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1538198 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:01.280 23:47:50 nvmf_rdma.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:01.537 rmmod nvme_rdma 00:22:01.537 rmmod nvme_fabrics 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1535138 ']' 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1535138 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@942 -- # '[' -z 1535138 ']' 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@946 -- # kill -0 1535138 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # uname 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:01.537 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1535138 00:22:01.795 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:22:01.795 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:22:01.795 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1535138' 00:22:01.795 killing process with pid 1535138 
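The teardown traced here follows the usual killprocess and nvmftestfini pattern: confirm the pid still maps to the expected reactor process, kill it and wait for it, then unload the NVMe RDMA modules. A compact, hedged sketch of that pattern using the values from this run; the sudo comparison mirrors the guard above, and wait only succeeds because the target was launched from the same shell.
# Hedged sketch of the killprocess pattern traced above (pid is the nvmf target from this run).
pid=1535138
if kill -0 "$pid" 2>/dev/null && [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"    # wait works only for children of this shell
fi
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics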
00:22:01.795 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@961 -- # kill 1535138 00:22:01.795 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@966 -- # wait 1535138 00:22:02.052 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:02.052 23:47:50 nvmf_rdma.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:02.052 00:22:02.052 real 0m35.761s 00:22:02.052 user 2m3.392s 00:22:02.052 sys 0m5.913s 00:22:02.052 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:02.052 23:47:50 nvmf_rdma.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:02.052 ************************************ 00:22:02.052 END TEST nvmf_failover 00:22:02.052 ************************************ 00:22:02.052 23:47:50 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:22:02.052 23:47:50 nvmf_rdma -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:22:02.052 23:47:50 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:02.052 23:47:50 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:02.052 23:47:50 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:02.052 ************************************ 00:22:02.052 START TEST nvmf_host_discovery 00:22:02.052 ************************************ 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:22:02.052 * Looking for test storage... 00:22:02.052 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
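The discovery test starts by sourcing nvmf/common.sh, which pins the three target ports (4420, 4421, 4422), generates a host NQN with nvme gen-hostnqn, derives a host ID from it, and packs both into the NVME_HOST array. A hedged sketch of how those variables are typically combined into an nvme connect call elsewhere in these tests; the target address and subsystem NQN below simply reuse the failover test's values, and deriving the host ID from the NQN's UUID suffix is an assumption.
# Hedged sketch; 192.168.100.8:4420 and the cnode1 NQN reuse values from the failover test above.
NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumption: host ID is the UUID suffix of that NQN
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 "${NVME_HOST[@]}"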
00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.052 23:47:50 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.053 23:47:50 
nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:02.053 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:22:02.053 00:22:02.053 real 0m0.113s 00:22:02.053 user 0m0.054s 00:22:02.053 sys 0m0.067s 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:02.053 23:47:50 nvmf_rdma.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.053 ************************************ 00:22:02.053 END TEST nvmf_host_discovery 00:22:02.053 ************************************ 00:22:02.053 23:47:51 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:22:02.053 23:47:51 nvmf_rdma -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:22:02.053 23:47:51 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:02.053 23:47:51 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:02.053 23:47:51 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:02.310 ************************************ 00:22:02.310 START TEST nvmf_host_multipath_status 00:22:02.310 ************************************ 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:22:02.310 * Looking for test storage... 
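nvmf_host_discovery finishes in well under a second above because discovery.sh bails out before doing any work on RDMA. A sketch of that early-exit guard; the trace only shows the comparison after expansion ('[' rdma == rdma ']'), so the TEST_TRANSPORT variable name is an assumption.
# Hedged sketch of the guard in discovery.sh; the variable name is assumed, not shown in the trace.
if [[ "$TEST_TRANSPORT" == rdma ]]; then
    echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
    exit 0
fi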
00:22:02.310 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.310 23:47:51 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.310 23:47:51 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:07.572 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:07.573 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:07.573 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:07.573 
23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:07.573 Found net devices under 0000:da:00.0: mlx_0_0 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:07.573 Found net devices under 0000:da:00.1: mlx_0_1 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:07.573 23:47:56 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:07.573 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:07.573 link/ether ec:0d:9a:8b:2b:7c brd 
ff:ff:ff:ff:ff:ff 00:22:07.573 altname enp218s0f0np0 00:22:07.573 altname ens818f0np0 00:22:07.573 inet 192.168.100.8/24 scope global mlx_0_0 00:22:07.573 valid_lft forever preferred_lft forever 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:07.573 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:07.573 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:07.573 altname enp218s0f1np1 00:22:07.573 altname ens818f1np1 00:22:07.573 inet 192.168.100.9/24 scope global mlx_0_1 00:22:07.573 valid_lft forever preferred_lft forever 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.573 23:47:56 
nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:07.573 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:07.574 192.168.100.9' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:07.574 192.168.100.9' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:07.574 192.168.100.9' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1543157 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1543157 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 1543157 ']' 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:07.574 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:07.832 [2024-07-15 23:47:56.571910] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:22:07.832 [2024-07-15 23:47:56.571955] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.832 [2024-07-15 23:47:56.622474] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:07.832 [2024-07-15 23:47:56.695596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.832 [2024-07-15 23:47:56.695636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.832 [2024-07-15 23:47:56.695644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.832 [2024-07-15 23:47:56.695650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.832 [2024-07-15 23:47:56.695655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
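The remainder of this trace exercises NVMe-oF multipath status over RDMA: the just-started nvmf_tgt is given an RDMA transport and a Malloc-backed subsystem with listeners on ports 4420 and 4421, a bdevperf instance attaches both listeners as paths of one controller, and each test case flips listener ANA states and reads the per-path view back through bdev_nvme_get_io_paths. A minimal sketch of that RPC flow, condensed from the commands logged below (the addresses, ports, NQNs, and sizes are the values that appear in this log; the bperf_rpc helper is only shorthand here, and this is an illustrative condensation, not multipath_status.sh itself):

    #!/usr/bin/env bash
    # Paths and sockets as logged in this run.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    bperf_rpc() { "$rpc" -s /var/tmp/bdevperf.sock "$@"; }   # bdevperf's private RPC socket

    # Target side: RDMA transport, Malloc namespace, two listeners on one subsystem.
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421

    # Host side (bdevperf): attach both listeners as paths of one controller, enabling multipath.
    bperf_rpc bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    bperf_rpc bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # Per test case: change a listener's ANA state, then read the host-side path view back.
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420 -n non_optimized
    bperf_rpc bdev_nvme_get_io_paths | \
        jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'

The same get_io_paths/jq pattern is repeated for the connected and accessible fields of each port, which is what the port_status/check_status calls in the trace below are doing.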
00:22:07.832 [2024-07-15 23:47:56.695693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.832 [2024-07-15 23:47:56.695697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.832 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:07.832 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0 00:22:07.832 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.832 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.832 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:08.089 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.089 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1543157 00:22:08.089 23:47:56 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:08.089 [2024-07-15 23:47:56.998399] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5db3c0/0x5df8b0) succeed. 00:22:08.089 [2024-07-15 23:47:57.007318] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5dc870/0x620f40) succeed. 00:22:08.345 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:08.345 Malloc0 00:22:08.345 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:08.600 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:08.856 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:08.856 [2024-07-15 23:47:57.753150] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:08.856 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:09.113 [2024-07-15 23:47:57.917444] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1543408 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # 
waitforlisten 1543408 /var/tmp/bdevperf.sock 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@823 -- # '[' -z 1543408 ']' 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:09.113 23:47:57 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:10.047 23:47:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:10.047 23:47:58 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # return 0 00:22:10.047 23:47:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:10.047 23:47:58 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:10.305 Nvme0n1 00:22:10.305 23:47:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:10.562 Nvme0n1 00:22:10.562 23:47:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:10.562 23:47:59 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:13.084 23:48:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:13.084 23:48:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:13.084 23:48:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:13.084 23:48:01 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:14.015 23:48:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:14.015 23:48:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.015 23:48:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:22:14.015 23:48:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.015 23:48:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.015 23:48:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:14.015 23:48:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.015 23:48:02 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:14.273 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.273 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:14.273 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.273 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:14.529 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.529 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:14.529 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.529 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:14.786 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.786 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:14.786 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.786 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:14.786 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.786 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:14.786 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.786 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.044 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.044 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # 
set_ANA_state non_optimized optimized 00:22:15.044 23:48:03 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:15.301 23:48:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:15.301 23:48:04 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.674 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:16.931 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.931 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:16.931 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.931 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:17.188 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.188 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:17.188 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.188 23:48:05 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:17.188 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.188 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:17.188 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:17.188 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.445 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.445 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:17.445 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:17.701 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:17.701 23:48:06 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:18.655 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:18.655 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:18.655 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.655 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:18.913 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.913 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:18.913 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.913 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:19.170 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.170 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:19.170 23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.170 
23:48:07 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:19.428 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.428 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:19.428 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.428 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:19.428 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.428 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:19.428 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.428 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:19.686 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.686 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:19.686 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.686 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:19.943 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.943 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:19.943 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:19.943 23:48:08 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:20.201 23:48:09 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:21.133 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:21.133 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:21.133 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.133 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:22:21.390 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.390 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:21.390 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.390 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.648 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.648 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.648 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.648 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.648 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.648 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.648 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.648 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.906 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.906 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:21.906 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.906 23:48:10 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:22.163 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.163 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:22.163 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.163 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.420 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.420 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:22.420 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:22.420 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:22.677 23:48:11 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:23.609 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:23.609 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:23.609 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.609 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.866 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:23.866 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:23.866 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.866 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.124 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.124 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.124 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.124 23:48:12 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.124 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.124 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.124 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.124 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.391 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.391 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:24.391 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.391 
23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.656 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.656 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:24.656 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.656 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:24.656 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.656 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:24.656 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:24.913 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:24.913 23:48:13 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:25.939 23:48:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:25.940 23:48:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:25.940 23:48:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.940 23:48:14 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:26.196 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.196 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:26.196 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.196 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:26.454 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.454 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:26.454 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.454 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:26.454 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.454 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:26.454 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.454 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:26.711 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.711 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:26.711 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.711 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.968 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.968 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:26.968 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.968 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.968 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.968 23:48:15 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:27.225 23:48:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:27.225 23:48:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:27.481 23:48:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:27.739 23:48:16 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:28.672 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:28.672 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:28.672 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.672 23:48:17 
nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.949 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.949 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:28.949 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.949 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.949 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.949 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.949 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.949 23:48:17 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:29.207 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.207 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:29.207 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.207 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:29.465 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.465 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:29.465 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.465 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.465 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.465 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.465 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.465 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.723 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.723 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:29.723 
23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:29.981 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:29.981 23:48:18 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:31.356 23:48:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:31.356 23:48:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:31.356 23:48:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.356 23:48:19 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.356 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.356 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:31.356 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.356 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.356 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.356 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.356 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.356 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.614 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.614 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.614 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.614 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.614 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.614 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.614 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.614 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:31.872 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.872 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:31.872 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.872 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.131 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.131 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:32.131 23:48:20 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:32.390 23:48:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:32.390 23:48:21 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:33.326 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:33.326 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:33.326 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.326 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.584 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.584 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:33.584 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.584 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:33.842 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.842 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:33.842 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.842 
23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:33.842 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.842 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:33.842 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.842 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:34.100 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.100 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:34.100 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.100 23:48:22 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.358 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.358 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:34.358 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.358 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:34.358 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.358 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:34.358 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:34.616 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:34.874 23:48:23 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:35.808 23:48:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:35.808 23:48:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:35.808 23:48:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.808 23:48:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4420").current' 00:22:36.065 23:48:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.065 23:48:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:36.065 23:48:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.065 23:48:24 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:36.323 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.323 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:36.323 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.323 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:36.323 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.323 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:36.323 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.323 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:36.581 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.581 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:36.581 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:36.581 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1543408 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 1543408 ']' 00:22:36.839 
23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 1543408 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1543408 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_2 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_2 = sudo ']' 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1543408' 00:22:36.839 killing process with pid 1543408 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 1543408 00:22:36.839 23:48:25 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 1543408 00:22:37.103 Connection closed with partial response: 00:22:37.103 00:22:37.103 00:22:37.103 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1543408 00:22:37.103 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:37.103 [2024-07-15 23:47:57.979965] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:22:37.103 [2024-07-15 23:47:57.980022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1543408 ] 00:22:37.103 [2024-07-15 23:47:58.032941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.103 [2024-07-15 23:47:58.106275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.103 Running I/O for 90 seconds... 
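The per-I/O qpair trace that follows is easier to read with the test flow above in mind: every port_status check asks the bdevperf RPC socket for its I/O paths and filters one field with jq, and every set_ANA_state call flips the ANA state of the two RDMA listeners on the target. A minimal bash sketch of that pattern, reconstructed from the trace above (not the actual multipath_status.sh source) and reusing the RPC socket, subsystem NQN and listener addresses shown in this log (192.168.100.8, ports 4420/4421):

#!/usr/bin/env bash
# Sketch only: reconstructed from the shell trace above, not the full test script.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# port_status <trsvcid> <field> <expected>: compare one field (current/connected/accessible)
# of the io_path using the given service id against the expected value.
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

# set_ANA_state <state for 4420> <state for 4421>: change the ANA state reported by
# each RDMA listener of nqn.2016-06.io.spdk:cnode1.
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

# Example: leave only 4421 usable, give the host a moment to react, then verify
# that 4421 is now the current path and 4420 is not.
set_ANA_state inaccessible optimized
sleep 1
port_status 4420 current false && port_status 4421 current true && echo OK

The sleep mirrors the pause the trace shows between changing the listener state and re-querying the host; without it the host may not yet have processed the ANA change.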
00:22:37.103 [2024-07-15 23:48:11.343726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.103 [2024-07-15 23:48:11.343764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.103 [2024-07-15 23:48:11.343782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.343790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.343806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.343822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.343837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.343852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.343867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.343882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.343897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.343912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.343932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.343948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.343963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.343978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.343987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.343993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.344009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.344023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.344039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182b00 00:22:37.104 [2024-07-15 23:48:11.344334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24112 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:37.104 [2024-07-15 23:48:11.344524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.104 [2024-07-15 23:48:11.344530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:49 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344772] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0005 p:0 m:0 
dnr:0 00:22:37.105 [2024-07-15 23:48:11.344923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.344990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.344998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.105 [2024-07-15 23:48:11.345156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:37.105 [2024-07-15 23:48:11.345164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:37.106 [2024-07-15 23:48:11.345366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 
lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345684] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.106 [2024-07-15 23:48:11.345773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182b00 00:22:37.106 [2024-07-15 23:48:11.345789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182b00 00:22:37.106 [2024-07-15 23:48:11.345804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:37.106 [2024-07-15 23:48:11.345813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x182b00 00:22:37.106 [2024-07-15 23:48:11.345819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.345835] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.345850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.345865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.345880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.345896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.345911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.345928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.345943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.345958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.345974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:23768 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.345990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.345999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 
len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.346804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.346822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.346838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.346853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.346868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.346877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.346883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:24016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:45 nsid:1 lba:24032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x182b00 00:22:37.107 [2024-07-15 23:48:11.347330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:24056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.347370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.347376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.357162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.357172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.357182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.357189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:37.107 [2024-07-15 23:48:11.357198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.107 [2024-07-15 23:48:11.357204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 
23:48:11.357229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:24104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:24176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 
sqhd:0070 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:24256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:24320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:24336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:37.108 [2024-07-15 23:48:11.357783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.108 [2024-07-15 23:48:11.357789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:37.109 [2024-07-15 23:48:11.357835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:24424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:24472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:24496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.357992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.357998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:24520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358127] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:24584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:24600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:24616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 
00:22:37.109 [2024-07-15 23:48:11.358275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:24672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:24712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.109 [2024-07-15 23:48:11.358401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:37.109 [2024-07-15 23:48:11.358410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:24736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:23864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 
key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e0000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:23912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075cc000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ce000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d0000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d2000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d8000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075da000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:23832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:11.358876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:23960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.358932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.358939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:11.359209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:11.359218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:23.684177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:23.684214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:23.684259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:93736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:23.684267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:23.684278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:23.684285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c 
p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:23.684295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x182b00 00:22:37.110 [2024-07-15 23:48:23.684301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:23.684310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.110 [2024-07-15 23:48:23.684317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:37.110 [2024-07-15 23:48:23.684326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.684333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.684349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.684365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.684855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.684875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.684890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.684906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93360 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007582000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.684921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:93384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.684936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.684951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:93432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.684967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.684982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.684991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.684997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d4000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 
23:48:23.685062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.685077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:93296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:93328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:93456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:93920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.685246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.685261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.685275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.685291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:93552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.685336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:102 nsid:1 lba:93584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075de000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:93648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x182b00 00:22:37.111 [2024-07-15 23:48:23.685569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.111 [2024-07-15 23:48:23.685586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:37.111 [2024-07-15 23:48:23.685595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.112 [2024-07-15 23:48:23.685617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:93512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x182b00 
00:22:37.112 [2024-07-15 23:48:23.685663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.112 [2024-07-15 23:48:23.685677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:93576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.112 [2024-07-15 23:48:23.685755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.112 [2024-07-15 23:48:23.685770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.112 [2024-07-15 23:48:23.685786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.112 [2024-07-15 23:48:23.685801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 
00:22:37.112 [2024-07-15 23:48:23.685810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c0000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.112 [2024-07-15 23:48:23.685846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:37.112 [2024-07-15 23:48:23.685870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x182b00 00:22:37.112 [2024-07-15 23:48:23.685876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:37.112 Received shutdown signal, test time was about 26.205805 seconds 00:22:37.112 00:22:37.112 Latency(us) 00:22:37.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.112 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:37.112 Verification LBA range: start 0x0 length 0x4000 00:22:37.112 Nvme0n1 : 26.21 15684.16 61.27 0.00 0.00 8138.37 57.05 3035877.18 00:22:37.112 =================================================================================================================== 00:22:37.112 Total : 15684.16 61.27 0.00 0.00 8138.37 57.05 3035877.18 00:22:37.112 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma 
']' 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:37.371 rmmod nvme_rdma 00:22:37.371 rmmod nvme_fabrics 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1543157 ']' 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1543157 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@942 -- # '[' -z 1543157 ']' 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # kill -0 1543157 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # uname 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1543157 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1543157' 00:22:37.371 killing process with pid 1543157 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@961 -- # kill 1543157 00:22:37.371 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # wait 1543157 00:22:37.629 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.629 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:37.629 00:22:37.629 real 0m35.505s 00:22:37.629 user 1m44.110s 00:22:37.629 sys 0m7.278s 00:22:37.629 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:37.629 23:48:26 nvmf_rdma.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:37.629 ************************************ 00:22:37.629 END TEST nvmf_host_multipath_status 00:22:37.629 ************************************ 00:22:37.629 23:48:26 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:22:37.629 23:48:26 nvmf_rdma -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:37.629 23:48:26 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:37.629 23:48:26 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:37.629 23:48:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:37.887 ************************************ 00:22:37.887 START TEST nvmf_discovery_remove_ifc 00:22:37.887 ************************************ 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1117 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:37.887 * Looking for test storage... 00:22:37.887 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.887 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:37.888 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
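The trace just above shows host/discovery_remove_ifc.sh declining to run on the rdma transport: it compares the transport string, echoes the notice printed above, and (as the next record shows) exits with status 0. A minimal sketch of that guard follows; the variable name TEST_TRANSPORT is an assumption for illustration, not necessarily what the script itself uses.

# Sketch of the early-exit guard traced above (assumed variable name).
if [ "$TEST_TRANSPORT" == "rdma" ]; then
    echo "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
    exit 0
fi

Because the script exits 0 rather than non-zero, the surrounding run_test wrapper still reports the case as completed, which is why an END TEST banner for nvmf_discovery_remove_ifc appears below instead of a failure.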
00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:22:37.888 00:22:37.888 real 0m0.112s 00:22:37.888 user 0m0.059s 00:22:37.888 sys 0m0.061s 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:37.888 23:48:26 nvmf_rdma.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:37.888 ************************************ 00:22:37.888 END TEST nvmf_discovery_remove_ifc 00:22:37.888 ************************************ 00:22:37.888 23:48:26 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:22:37.888 23:48:26 nvmf_rdma -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:37.888 23:48:26 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:37.888 23:48:26 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:37.888 23:48:26 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:37.888 ************************************ 00:22:37.888 START TEST nvmf_identify_kernel_target 00:22:37.888 ************************************ 00:22:37.888 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:37.888 * Looking for test storage... 00:22:37.888 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:37.888 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.888 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.146 23:48:26 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.146 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.147 23:48:26 
nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.147 23:48:26 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 
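The gather_supported_nvmf_pci_devs trace that starts above and continues below builds per-family lists of RDMA-capable NICs (Intel E810/X722 and Mellanox ConnectX) from cached PCI vendor/device IDs, then looks up the net device registered under each PCI address. A rough stand-alone equivalent for the Mellanox case is sketched here using lspci directly instead of the harness's pci_bus_cache; the loop structure and variable names are illustrative assumptions.

# List Mellanox (vendor 0x15b3) adapters with the ConnectX device IDs named in the
# trace, then show the net device(s) registered under each PCI address.
for dev_id in 1015 1017 1019 101d 1013 1021 a2d6 a2dc; do
    for pci in $(lspci -Dn -d "15b3:${dev_id}" | awk '{print $1}'); do
        ls "/sys/bus/pci/devices/${pci}/net/" 2>/dev/null
    done
done

On this machine that resolves to the two mlx5 ports reported just below: mlx_0_0 under 0000:da:00.0 and mlx_0_1 under 0000:da:00.1.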
00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.408 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:43.409 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:43.409 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:43.409 Found net devices under 0000:da:00.0: mlx_0_0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:43.409 Found net devices under 0000:da:00.1: mlx_0_1 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:43.409 
23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 
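load_ib_rdma_modules and the get_ip_address calls traced around here reduce to a short sequence of stock commands. A condensed sketch of the same two steps (run as root; mlx_0_0/mlx_0_1 are the interface names this rig reports):

  # Same module set the trace loads via nvmf/common.sh@62-68.
  for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
      modprobe "$mod"
  done

  # get_ip_address, as traced next, is just this pipeline per RDMA interface.
  for ifc in mlx_0_0 mlx_0_1; do
      ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
  done

The [[ -z $ip ]] check at nvmf/common.sh@75 presumably falls back to assigning an address when the pipeline returns nothing; here both ports already carry 192.168.100.8 and 192.168.100.9, so the trace simply reports the existing configuration.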
00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:43.409 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:43.409 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:43.409 altname enp218s0f0np0 00:22:43.409 altname ens818f0np0 00:22:43.409 inet 192.168.100.8/24 scope global mlx_0_0 00:22:43.409 valid_lft forever preferred_lft forever 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:43.409 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:43.409 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:43.409 altname enp218s0f1np1 00:22:43.409 altname ens818f1np1 00:22:43.409 inet 192.168.100.9/24 scope global mlx_0_1 00:22:43.409 valid_lft forever preferred_lft forever 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:43.409 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:43.410 192.168.100.9' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:43.410 192.168.100.9' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:43.410 192.168.100.9' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:43.410 23:48:31 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:45.306 Waiting for block devices as requested 00:22:45.306 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:22:45.564 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:45.564 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:45.564 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:45.821 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:45.821 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:45.821 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:45.821 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:46.078 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:46.078 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:46.078 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:46.336 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:46.336 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:46.336 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:46.336 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:46.599 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:46.599 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:46.599 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:46.860 No valid GPT data, bailing 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir 
/sys/kernel/config/nvmet/ports/1 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:22:46.860 00:22:46.860 Discovery Log Number of Records 2, Generation counter 2 00:22:46.860 =====Discovery Log Entry 0====== 00:22:46.860 trtype: rdma 00:22:46.860 adrfam: ipv4 00:22:46.860 subtype: current discovery subsystem 00:22:46.860 treq: not specified, sq flow control disable supported 00:22:46.860 portid: 1 00:22:46.860 trsvcid: 4420 00:22:46.860 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:46.860 traddr: 192.168.100.8 00:22:46.860 eflags: none 00:22:46.860 rdma_prtype: not specified 00:22:46.860 rdma_qptype: connected 00:22:46.860 rdma_cms: rdma-cm 00:22:46.860 rdma_pkey: 0x0000 00:22:46.860 =====Discovery Log Entry 1====== 00:22:46.860 trtype: rdma 00:22:46.860 adrfam: ipv4 00:22:46.860 subtype: nvme subsystem 00:22:46.860 treq: not specified, sq flow control disable supported 00:22:46.860 portid: 1 00:22:46.860 trsvcid: 4420 00:22:46.860 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:46.860 traddr: 192.168.100.8 00:22:46.860 eflags: none 00:22:46.860 rdma_prtype: not specified 00:22:46.860 rdma_qptype: connected 00:22:46.860 rdma_cms: rdma-cm 00:22:46.860 rdma_pkey: 0x0000 00:22:46.860 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:22:46.860 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:47.118 ===================================================== 00:22:47.118 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:47.118 ===================================================== 00:22:47.118 Controller Capabilities/Features 00:22:47.118 ================================ 00:22:47.118 Vendor ID: 0000 00:22:47.118 Subsystem Vendor ID: 0000 00:22:47.118 Serial Number: 3d22f1ff98aa495f2a62 00:22:47.118 Model Number: Linux 00:22:47.118 Firmware Version: 6.7.0-68 00:22:47.118 Recommended Arb Burst: 0 00:22:47.118 IEEE OUI Identifier: 00 00 00 00:22:47.118 Multi-path I/O 00:22:47.118 May have multiple subsystem ports: No 00:22:47.118 May have multiple controllers: No 00:22:47.118 Associated with SR-IOV VF: No 00:22:47.118 Max Data Transfer Size: Unlimited 00:22:47.118 Max Number of 
Namespaces: 0 00:22:47.118 Max Number of I/O Queues: 1024 00:22:47.118 NVMe Specification Version (VS): 1.3 00:22:47.118 NVMe Specification Version (Identify): 1.3 00:22:47.118 Maximum Queue Entries: 128 00:22:47.118 Contiguous Queues Required: No 00:22:47.118 Arbitration Mechanisms Supported 00:22:47.118 Weighted Round Robin: Not Supported 00:22:47.118 Vendor Specific: Not Supported 00:22:47.118 Reset Timeout: 7500 ms 00:22:47.118 Doorbell Stride: 4 bytes 00:22:47.118 NVM Subsystem Reset: Not Supported 00:22:47.118 Command Sets Supported 00:22:47.118 NVM Command Set: Supported 00:22:47.118 Boot Partition: Not Supported 00:22:47.118 Memory Page Size Minimum: 4096 bytes 00:22:47.118 Memory Page Size Maximum: 4096 bytes 00:22:47.118 Persistent Memory Region: Not Supported 00:22:47.118 Optional Asynchronous Events Supported 00:22:47.118 Namespace Attribute Notices: Not Supported 00:22:47.118 Firmware Activation Notices: Not Supported 00:22:47.118 ANA Change Notices: Not Supported 00:22:47.118 PLE Aggregate Log Change Notices: Not Supported 00:22:47.118 LBA Status Info Alert Notices: Not Supported 00:22:47.118 EGE Aggregate Log Change Notices: Not Supported 00:22:47.118 Normal NVM Subsystem Shutdown event: Not Supported 00:22:47.118 Zone Descriptor Change Notices: Not Supported 00:22:47.118 Discovery Log Change Notices: Supported 00:22:47.118 Controller Attributes 00:22:47.118 128-bit Host Identifier: Not Supported 00:22:47.118 Non-Operational Permissive Mode: Not Supported 00:22:47.118 NVM Sets: Not Supported 00:22:47.118 Read Recovery Levels: Not Supported 00:22:47.118 Endurance Groups: Not Supported 00:22:47.118 Predictable Latency Mode: Not Supported 00:22:47.119 Traffic Based Keep ALive: Not Supported 00:22:47.119 Namespace Granularity: Not Supported 00:22:47.119 SQ Associations: Not Supported 00:22:47.119 UUID List: Not Supported 00:22:47.119 Multi-Domain Subsystem: Not Supported 00:22:47.119 Fixed Capacity Management: Not Supported 00:22:47.119 Variable Capacity Management: Not Supported 00:22:47.119 Delete Endurance Group: Not Supported 00:22:47.119 Delete NVM Set: Not Supported 00:22:47.119 Extended LBA Formats Supported: Not Supported 00:22:47.119 Flexible Data Placement Supported: Not Supported 00:22:47.119 00:22:47.119 Controller Memory Buffer Support 00:22:47.119 ================================ 00:22:47.119 Supported: No 00:22:47.119 00:22:47.119 Persistent Memory Region Support 00:22:47.119 ================================ 00:22:47.119 Supported: No 00:22:47.119 00:22:47.119 Admin Command Set Attributes 00:22:47.119 ============================ 00:22:47.119 Security Send/Receive: Not Supported 00:22:47.119 Format NVM: Not Supported 00:22:47.119 Firmware Activate/Download: Not Supported 00:22:47.119 Namespace Management: Not Supported 00:22:47.119 Device Self-Test: Not Supported 00:22:47.119 Directives: Not Supported 00:22:47.119 NVMe-MI: Not Supported 00:22:47.119 Virtualization Management: Not Supported 00:22:47.119 Doorbell Buffer Config: Not Supported 00:22:47.119 Get LBA Status Capability: Not Supported 00:22:47.119 Command & Feature Lockdown Capability: Not Supported 00:22:47.119 Abort Command Limit: 1 00:22:47.119 Async Event Request Limit: 1 00:22:47.119 Number of Firmware Slots: N/A 00:22:47.119 Firmware Slot 1 Read-Only: N/A 00:22:47.119 Firmware Activation Without Reset: N/A 00:22:47.119 Multiple Update Detection Support: N/A 00:22:47.119 Firmware Update Granularity: No Information Provided 00:22:47.119 Per-Namespace SMART Log: No 00:22:47.119 Asymmetric Namespace 
Access Log Page: Not Supported 00:22:47.119 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:47.119 Command Effects Log Page: Not Supported 00:22:47.119 Get Log Page Extended Data: Supported 00:22:47.119 Telemetry Log Pages: Not Supported 00:22:47.119 Persistent Event Log Pages: Not Supported 00:22:47.119 Supported Log Pages Log Page: May Support 00:22:47.119 Commands Supported & Effects Log Page: Not Supported 00:22:47.119 Feature Identifiers & Effects Log Page:May Support 00:22:47.119 NVMe-MI Commands & Effects Log Page: May Support 00:22:47.119 Data Area 4 for Telemetry Log: Not Supported 00:22:47.119 Error Log Page Entries Supported: 1 00:22:47.119 Keep Alive: Not Supported 00:22:47.119 00:22:47.119 NVM Command Set Attributes 00:22:47.119 ========================== 00:22:47.119 Submission Queue Entry Size 00:22:47.119 Max: 1 00:22:47.119 Min: 1 00:22:47.119 Completion Queue Entry Size 00:22:47.119 Max: 1 00:22:47.119 Min: 1 00:22:47.119 Number of Namespaces: 0 00:22:47.119 Compare Command: Not Supported 00:22:47.119 Write Uncorrectable Command: Not Supported 00:22:47.119 Dataset Management Command: Not Supported 00:22:47.119 Write Zeroes Command: Not Supported 00:22:47.119 Set Features Save Field: Not Supported 00:22:47.119 Reservations: Not Supported 00:22:47.119 Timestamp: Not Supported 00:22:47.119 Copy: Not Supported 00:22:47.119 Volatile Write Cache: Not Present 00:22:47.119 Atomic Write Unit (Normal): 1 00:22:47.119 Atomic Write Unit (PFail): 1 00:22:47.119 Atomic Compare & Write Unit: 1 00:22:47.119 Fused Compare & Write: Not Supported 00:22:47.119 Scatter-Gather List 00:22:47.119 SGL Command Set: Supported 00:22:47.119 SGL Keyed: Supported 00:22:47.119 SGL Bit Bucket Descriptor: Not Supported 00:22:47.119 SGL Metadata Pointer: Not Supported 00:22:47.119 Oversized SGL: Not Supported 00:22:47.119 SGL Metadata Address: Not Supported 00:22:47.119 SGL Offset: Supported 00:22:47.119 Transport SGL Data Block: Not Supported 00:22:47.119 Replay Protected Memory Block: Not Supported 00:22:47.119 00:22:47.119 Firmware Slot Information 00:22:47.119 ========================= 00:22:47.119 Active slot: 0 00:22:47.119 00:22:47.119 00:22:47.119 Error Log 00:22:47.119 ========= 00:22:47.119 00:22:47.119 Active Namespaces 00:22:47.119 ================= 00:22:47.119 Discovery Log Page 00:22:47.119 ================== 00:22:47.119 Generation Counter: 2 00:22:47.119 Number of Records: 2 00:22:47.119 Record Format: 0 00:22:47.119 00:22:47.119 Discovery Log Entry 0 00:22:47.119 ---------------------- 00:22:47.119 Transport Type: 1 (RDMA) 00:22:47.119 Address Family: 1 (IPv4) 00:22:47.119 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:47.119 Entry Flags: 00:22:47.119 Duplicate Returned Information: 0 00:22:47.119 Explicit Persistent Connection Support for Discovery: 0 00:22:47.119 Transport Requirements: 00:22:47.119 Secure Channel: Not Specified 00:22:47.119 Port ID: 1 (0x0001) 00:22:47.119 Controller ID: 65535 (0xffff) 00:22:47.119 Admin Max SQ Size: 32 00:22:47.119 Transport Service Identifier: 4420 00:22:47.119 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:47.119 Transport Address: 192.168.100.8 00:22:47.119 Transport Specific Address Subtype - RDMA 00:22:47.119 RDMA QP Service Type: 1 (Reliable Connected) 00:22:47.119 RDMA Provider Type: 1 (No provider specified) 00:22:47.119 RDMA CM Service: 1 (RDMA_CM) 00:22:47.119 Discovery Log Entry 1 00:22:47.119 ---------------------- 00:22:47.119 Transport Type: 1 (RDMA) 00:22:47.119 Address Family: 1 (IPv4) 
00:22:47.119 Subsystem Type: 2 (NVM Subsystem) 00:22:47.119 Entry Flags: 00:22:47.119 Duplicate Returned Information: 0 00:22:47.119 Explicit Persistent Connection Support for Discovery: 0 00:22:47.119 Transport Requirements: 00:22:47.119 Secure Channel: Not Specified 00:22:47.119 Port ID: 1 (0x0001) 00:22:47.119 Controller ID: 65535 (0xffff) 00:22:47.119 Admin Max SQ Size: 32 00:22:47.119 Transport Service Identifier: 4420 00:22:47.119 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:47.119 Transport Address: 192.168.100.8 00:22:47.119 Transport Specific Address Subtype - RDMA 00:22:47.119 RDMA QP Service Type: 1 (Reliable Connected) 00:22:47.119 RDMA Provider Type: 1 (No provider specified) 00:22:47.119 RDMA CM Service: 1 (RDMA_CM) 00:22:47.119 23:48:35 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:47.119 get_feature(0x01) failed 00:22:47.119 get_feature(0x02) failed 00:22:47.119 get_feature(0x04) failed 00:22:47.119 ===================================================== 00:22:47.119 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:22:47.119 ===================================================== 00:22:47.119 Controller Capabilities/Features 00:22:47.119 ================================ 00:22:47.119 Vendor ID: 0000 00:22:47.119 Subsystem Vendor ID: 0000 00:22:47.119 Serial Number: 232e24b7169fbe3e04e2 00:22:47.119 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:47.119 Firmware Version: 6.7.0-68 00:22:47.119 Recommended Arb Burst: 6 00:22:47.119 IEEE OUI Identifier: 00 00 00 00:22:47.119 Multi-path I/O 00:22:47.119 May have multiple subsystem ports: Yes 00:22:47.119 May have multiple controllers: Yes 00:22:47.119 Associated with SR-IOV VF: No 00:22:47.119 Max Data Transfer Size: 1048576 00:22:47.119 Max Number of Namespaces: 1024 00:22:47.119 Max Number of I/O Queues: 128 00:22:47.119 NVMe Specification Version (VS): 1.3 00:22:47.119 NVMe Specification Version (Identify): 1.3 00:22:47.119 Maximum Queue Entries: 128 00:22:47.119 Contiguous Queues Required: No 00:22:47.119 Arbitration Mechanisms Supported 00:22:47.119 Weighted Round Robin: Not Supported 00:22:47.119 Vendor Specific: Not Supported 00:22:47.119 Reset Timeout: 7500 ms 00:22:47.119 Doorbell Stride: 4 bytes 00:22:47.119 NVM Subsystem Reset: Not Supported 00:22:47.119 Command Sets Supported 00:22:47.119 NVM Command Set: Supported 00:22:47.119 Boot Partition: Not Supported 00:22:47.119 Memory Page Size Minimum: 4096 bytes 00:22:47.119 Memory Page Size Maximum: 4096 bytes 00:22:47.119 Persistent Memory Region: Not Supported 00:22:47.119 Optional Asynchronous Events Supported 00:22:47.119 Namespace Attribute Notices: Supported 00:22:47.119 Firmware Activation Notices: Not Supported 00:22:47.119 ANA Change Notices: Supported 00:22:47.119 PLE Aggregate Log Change Notices: Not Supported 00:22:47.119 LBA Status Info Alert Notices: Not Supported 00:22:47.119 EGE Aggregate Log Change Notices: Not Supported 00:22:47.119 Normal NVM Subsystem Shutdown event: Not Supported 00:22:47.119 Zone Descriptor Change Notices: Not Supported 00:22:47.119 Discovery Log Change Notices: Not Supported 00:22:47.119 Controller Attributes 00:22:47.119 128-bit Host Identifier: Supported 00:22:47.119 Non-Operational Permissive Mode: Not Supported 00:22:47.119 NVM Sets: Not Supported 00:22:47.119 Read 
Recovery Levels: Not Supported 00:22:47.119 Endurance Groups: Not Supported 00:22:47.119 Predictable Latency Mode: Not Supported 00:22:47.119 Traffic Based Keep ALive: Supported 00:22:47.119 Namespace Granularity: Not Supported 00:22:47.119 SQ Associations: Not Supported 00:22:47.119 UUID List: Not Supported 00:22:47.119 Multi-Domain Subsystem: Not Supported 00:22:47.119 Fixed Capacity Management: Not Supported 00:22:47.119 Variable Capacity Management: Not Supported 00:22:47.119 Delete Endurance Group: Not Supported 00:22:47.119 Delete NVM Set: Not Supported 00:22:47.119 Extended LBA Formats Supported: Not Supported 00:22:47.119 Flexible Data Placement Supported: Not Supported 00:22:47.119 00:22:47.119 Controller Memory Buffer Support 00:22:47.119 ================================ 00:22:47.119 Supported: No 00:22:47.119 00:22:47.120 Persistent Memory Region Support 00:22:47.120 ================================ 00:22:47.120 Supported: No 00:22:47.120 00:22:47.120 Admin Command Set Attributes 00:22:47.120 ============================ 00:22:47.120 Security Send/Receive: Not Supported 00:22:47.120 Format NVM: Not Supported 00:22:47.120 Firmware Activate/Download: Not Supported 00:22:47.120 Namespace Management: Not Supported 00:22:47.120 Device Self-Test: Not Supported 00:22:47.120 Directives: Not Supported 00:22:47.120 NVMe-MI: Not Supported 00:22:47.120 Virtualization Management: Not Supported 00:22:47.120 Doorbell Buffer Config: Not Supported 00:22:47.120 Get LBA Status Capability: Not Supported 00:22:47.120 Command & Feature Lockdown Capability: Not Supported 00:22:47.120 Abort Command Limit: 4 00:22:47.120 Async Event Request Limit: 4 00:22:47.120 Number of Firmware Slots: N/A 00:22:47.120 Firmware Slot 1 Read-Only: N/A 00:22:47.120 Firmware Activation Without Reset: N/A 00:22:47.120 Multiple Update Detection Support: N/A 00:22:47.120 Firmware Update Granularity: No Information Provided 00:22:47.120 Per-Namespace SMART Log: Yes 00:22:47.120 Asymmetric Namespace Access Log Page: Supported 00:22:47.120 ANA Transition Time : 10 sec 00:22:47.120 00:22:47.120 Asymmetric Namespace Access Capabilities 00:22:47.120 ANA Optimized State : Supported 00:22:47.120 ANA Non-Optimized State : Supported 00:22:47.120 ANA Inaccessible State : Supported 00:22:47.120 ANA Persistent Loss State : Supported 00:22:47.120 ANA Change State : Supported 00:22:47.120 ANAGRPID is not changed : No 00:22:47.120 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:47.120 00:22:47.120 ANA Group Identifier Maximum : 128 00:22:47.120 Number of ANA Group Identifiers : 128 00:22:47.120 Max Number of Allowed Namespaces : 1024 00:22:47.120 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:47.120 Command Effects Log Page: Supported 00:22:47.120 Get Log Page Extended Data: Supported 00:22:47.120 Telemetry Log Pages: Not Supported 00:22:47.120 Persistent Event Log Pages: Not Supported 00:22:47.120 Supported Log Pages Log Page: May Support 00:22:47.120 Commands Supported & Effects Log Page: Not Supported 00:22:47.120 Feature Identifiers & Effects Log Page:May Support 00:22:47.120 NVMe-MI Commands & Effects Log Page: May Support 00:22:47.120 Data Area 4 for Telemetry Log: Not Supported 00:22:47.120 Error Log Page Entries Supported: 128 00:22:47.120 Keep Alive: Supported 00:22:47.120 Keep Alive Granularity: 1000 ms 00:22:47.120 00:22:47.120 NVM Command Set Attributes 00:22:47.120 ========================== 00:22:47.120 Submission Queue Entry Size 00:22:47.120 Max: 64 00:22:47.120 Min: 64 00:22:47.120 Completion Queue Entry Size 
00:22:47.120 Max: 16 00:22:47.120 Min: 16 00:22:47.120 Number of Namespaces: 1024 00:22:47.120 Compare Command: Not Supported 00:22:47.120 Write Uncorrectable Command: Not Supported 00:22:47.120 Dataset Management Command: Supported 00:22:47.120 Write Zeroes Command: Supported 00:22:47.120 Set Features Save Field: Not Supported 00:22:47.120 Reservations: Not Supported 00:22:47.120 Timestamp: Not Supported 00:22:47.120 Copy: Not Supported 00:22:47.120 Volatile Write Cache: Present 00:22:47.120 Atomic Write Unit (Normal): 1 00:22:47.120 Atomic Write Unit (PFail): 1 00:22:47.120 Atomic Compare & Write Unit: 1 00:22:47.120 Fused Compare & Write: Not Supported 00:22:47.120 Scatter-Gather List 00:22:47.120 SGL Command Set: Supported 00:22:47.120 SGL Keyed: Supported 00:22:47.120 SGL Bit Bucket Descriptor: Not Supported 00:22:47.120 SGL Metadata Pointer: Not Supported 00:22:47.120 Oversized SGL: Not Supported 00:22:47.120 SGL Metadata Address: Not Supported 00:22:47.120 SGL Offset: Supported 00:22:47.120 Transport SGL Data Block: Not Supported 00:22:47.120 Replay Protected Memory Block: Not Supported 00:22:47.120 00:22:47.120 Firmware Slot Information 00:22:47.120 ========================= 00:22:47.120 Active slot: 0 00:22:47.120 00:22:47.120 Asymmetric Namespace Access 00:22:47.120 =========================== 00:22:47.120 Change Count : 0 00:22:47.120 Number of ANA Group Descriptors : 1 00:22:47.120 ANA Group Descriptor : 0 00:22:47.120 ANA Group ID : 1 00:22:47.120 Number of NSID Values : 1 00:22:47.120 Change Count : 0 00:22:47.120 ANA State : 1 00:22:47.120 Namespace Identifier : 1 00:22:47.120 00:22:47.120 Commands Supported and Effects 00:22:47.120 ============================== 00:22:47.120 Admin Commands 00:22:47.120 -------------- 00:22:47.120 Get Log Page (02h): Supported 00:22:47.120 Identify (06h): Supported 00:22:47.120 Abort (08h): Supported 00:22:47.120 Set Features (09h): Supported 00:22:47.120 Get Features (0Ah): Supported 00:22:47.120 Asynchronous Event Request (0Ch): Supported 00:22:47.120 Keep Alive (18h): Supported 00:22:47.120 I/O Commands 00:22:47.120 ------------ 00:22:47.120 Flush (00h): Supported 00:22:47.120 Write (01h): Supported LBA-Change 00:22:47.120 Read (02h): Supported 00:22:47.120 Write Zeroes (08h): Supported LBA-Change 00:22:47.120 Dataset Management (09h): Supported 00:22:47.120 00:22:47.120 Error Log 00:22:47.120 ========= 00:22:47.120 Entry: 0 00:22:47.120 Error Count: 0x3 00:22:47.120 Submission Queue Id: 0x0 00:22:47.120 Command Id: 0x5 00:22:47.120 Phase Bit: 0 00:22:47.120 Status Code: 0x2 00:22:47.120 Status Code Type: 0x0 00:22:47.120 Do Not Retry: 1 00:22:47.120 Error Location: 0x28 00:22:47.120 LBA: 0x0 00:22:47.120 Namespace: 0x0 00:22:47.120 Vendor Log Page: 0x0 00:22:47.120 ----------- 00:22:47.120 Entry: 1 00:22:47.120 Error Count: 0x2 00:22:47.120 Submission Queue Id: 0x0 00:22:47.120 Command Id: 0x5 00:22:47.120 Phase Bit: 0 00:22:47.120 Status Code: 0x2 00:22:47.120 Status Code Type: 0x0 00:22:47.120 Do Not Retry: 1 00:22:47.120 Error Location: 0x28 00:22:47.120 LBA: 0x0 00:22:47.120 Namespace: 0x0 00:22:47.120 Vendor Log Page: 0x0 00:22:47.120 ----------- 00:22:47.120 Entry: 2 00:22:47.120 Error Count: 0x1 00:22:47.120 Submission Queue Id: 0x0 00:22:47.120 Command Id: 0x0 00:22:47.120 Phase Bit: 0 00:22:47.120 Status Code: 0x2 00:22:47.120 Status Code Type: 0x0 00:22:47.120 Do Not Retry: 1 00:22:47.120 Error Location: 0x28 00:22:47.120 LBA: 0x0 00:22:47.120 Namespace: 0x0 00:22:47.120 Vendor Log Page: 0x0 00:22:47.120 00:22:47.120 Number of 
Queues 00:22:47.120 ================ 00:22:47.120 Number of I/O Submission Queues: 128 00:22:47.120 Number of I/O Completion Queues: 128 00:22:47.120 00:22:47.120 ZNS Specific Controller Data 00:22:47.120 ============================ 00:22:47.120 Zone Append Size Limit: 0 00:22:47.120 00:22:47.120 00:22:47.120 Active Namespaces 00:22:47.120 ================= 00:22:47.120 get_feature(0x05) failed 00:22:47.120 Namespace ID:1 00:22:47.120 Command Set Identifier: NVM (00h) 00:22:47.120 Deallocate: Supported 00:22:47.120 Deallocated/Unwritten Error: Not Supported 00:22:47.120 Deallocated Read Value: Unknown 00:22:47.120 Deallocate in Write Zeroes: Not Supported 00:22:47.120 Deallocated Guard Field: 0xFFFF 00:22:47.120 Flush: Supported 00:22:47.120 Reservation: Not Supported 00:22:47.120 Namespace Sharing Capabilities: Multiple Controllers 00:22:47.120 Size (in LBAs): 3125627568 (1490GiB) 00:22:47.120 Capacity (in LBAs): 3125627568 (1490GiB) 00:22:47.120 Utilization (in LBAs): 3125627568 (1490GiB) 00:22:47.120 UUID: 7b22ce06-fc6e-4f37-a24c-29752add10df 00:22:47.120 Thin Provisioning: Not Supported 00:22:47.120 Per-NS Atomic Units: Yes 00:22:47.120 Atomic Boundary Size (Normal): 0 00:22:47.120 Atomic Boundary Size (PFail): 0 00:22:47.120 Atomic Boundary Offset: 0 00:22:47.120 NGUID/EUI64 Never Reused: No 00:22:47.120 ANA group ID: 1 00:22:47.120 Namespace Write Protected: No 00:22:47.120 Number of LBA Formats: 1 00:22:47.120 Current LBA Format: LBA Format #00 00:22:47.120 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:47.120 00:22:47.120 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:47.120 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:47.120 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:47.120 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:47.120 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:47.120 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:47.120 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:47.120 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:47.120 rmmod nvme_rdma 00:22:47.377 rmmod nvme_fabrics 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f 
/sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:22:47.377 23:48:36 nvmf_rdma.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:49.908 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:49.908 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:51.287 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:22:51.287 00:22:51.287 real 0m13.307s 00:22:51.287 user 0m3.414s 00:22:51.287 sys 0m7.388s 00:22:51.287 23:48:40 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@1118 -- # xtrace_disable 00:22:51.287 23:48:40 nvmf_rdma.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.287 ************************************ 00:22:51.287 END TEST nvmf_identify_kernel_target 00:22:51.287 ************************************ 00:22:51.287 23:48:40 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:22:51.287 23:48:40 nvmf_rdma -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:51.287 23:48:40 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:22:51.287 23:48:40 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:22:51.287 23:48:40 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:22:51.287 ************************************ 00:22:51.287 START TEST nvmf_auth_host 00:22:51.287 ************************************ 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:51.287 * Looking for test storage... 
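Before the auth test gets going, it is worth condensing what identify_kernel_target just did with the kernel target: everything between the discovery run above and the cleanup just traced is a handful of configfs operations. The xtrace shows the echo commands but not their redirect targets, so the attribute file names below are the standard nvmet configfs attributes they most plausibly write to -- a reconstruction, not a verbatim copy of configure_kernel_target/clean_kernel_target.

  nqn=nqn.2016-06.io.spdk:testnqn
  subsys=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  # Setup: expose /dev/nvme0n1 over RDMA on 192.168.100.8:4420 (assumed attribute names).
  modprobe nvmet
  mkdir "$subsys" "$subsys/namespaces/1" "$port"
  echo "SPDK-$nqn"       > "$subsys/attr_model"
  echo 1                 > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1      > "$subsys/namespaces/1/device_path"
  echo 1                 > "$subsys/namespaces/1/enable"
  echo 192.168.100.8     > "$port/addr_traddr"
  echo rdma              > "$port/addr_trtype"
  echo 4420              > "$port/addr_trsvcid"
  echo ipv4              > "$port/addr_adrfam"
  ln -s "$subsys" "$port/subsystems/"

  # Teardown, matching the clean_kernel_target trace above.
  echo 0 > "$subsys/namespaces/1/enable"      # assumed target of the traced 'echo 0'
  rm -f "$port/subsystems/$nqn"
  rmdir "$subsys/namespaces/1" "$port" "$subsys"
  modprobe -r nvmet_rdma nvmet

Once the port symlink is in place, the nvme discover call above (-t rdma -a 192.168.100.8 -s 4420) returns the two discovery-log records shown, and the two spdk_nvme_identify runs connect to the discovery subsystem and to nqn.2016-06.io.spdk:testnqn respectively.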
00:22:51.287 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.287 23:48:40 nvmf_rdma.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@21 -- # ckeys=() 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.288 23:48:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.545 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:51.545 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:51.545 23:48:40 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:51.545 23:48:40 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:56.810 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:56.810 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.810 23:48:45 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:56.810 Found net devices under 0000:da:00.0: mlx_0_0 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:56.810 Found net devices under 0000:da:00.1: mlx_0_1 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:56.810 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:56.811 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:56.811 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:22:56.811 altname enp218s0f0np0 00:22:56.811 altname ens818f0np0 00:22:56.811 inet 192.168.100.8/24 scope global mlx_0_0 00:22:56.811 valid_lft forever preferred_lft forever 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:56.811 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:56.811 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:22:56.811 altname enp218s0f1np1 00:22:56.811 altname ens818f1np1 00:22:56.811 inet 192.168.100.9/24 scope global mlx_0_1 00:22:56.811 valid_lft forever preferred_lft forever 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:56.811 
23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:56.811 192.168.100.9' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:56.811 192.168.100.9' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:56.811 192.168.100.9' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1557644 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1557644 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 1557644 ']' 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
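For reference, the get_ip_address / RDMA_IP_LIST handling traced above reduces to the short pipeline below. This is a minimal sketch assembled from the trace lines themselves; the interface names (mlx_0_0, mlx_0_1) and the 192.168.100.8/192.168.100.9 addresses are simply what this particular run reported.

get_ip_address() {    # minimal sketch of the helper traced above
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
RDMA_IP_LIST="$(get_ip_address mlx_0_0)
$(get_ip_address mlx_0_1)"                                   # newline-separated list of RDMA-capable interface IPs
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                  # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)    # 192.168.100.9 in this run
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'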
00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.811 23:48:45 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:57.378 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:57.378 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:22:57.378 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.378 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:57.378 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=eb24cd8adbf601e56fa782f9b4ecc5ff 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.xwA 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key eb24cd8adbf601e56fa782f9b4ecc5ff 0 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 eb24cd8adbf601e56fa782f9b4ecc5ff 0 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=eb24cd8adbf601e56fa782f9b4ecc5ff 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.xwA 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.xwA 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.xwA 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:57.637 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=61945e7efb75b494ae5e3de61f27406293a4caa00c5d11d6977b4607e4e345de 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.4TO 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 61945e7efb75b494ae5e3de61f27406293a4caa00c5d11d6977b4607e4e345de 3 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 61945e7efb75b494ae5e3de61f27406293a4caa00c5d11d6977b4607e4e345de 3 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=61945e7efb75b494ae5e3de61f27406293a4caa00c5d11d6977b4607e4e345de 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.4TO 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.4TO 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4TO 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e0d45e6d3eeffc6ec0920c711bd047fcc498567b6bff2e42 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.50H 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e0d45e6d3eeffc6ec0920c711bd047fcc498567b6bff2e42 0 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e0d45e6d3eeffc6ec0920c711bd047fcc498567b6bff2e42 0 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e0d45e6d3eeffc6ec0920c711bd047fcc498567b6bff2e42 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@704 -- # digest=0 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.50H 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.50H 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.50H 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8c494d90385eab50a96838d872bb8d730bad8a9355ea2720 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OJq 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8c494d90385eab50a96838d872bb8d730bad8a9355ea2720 2 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8c494d90385eab50a96838d872bb8d730bad8a9355ea2720 2 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8c494d90385eab50a96838d872bb8d730bad8a9355ea2720 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:57.638 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OJq 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OJq 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.OJq 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=603d52274408090519486f4492047445 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # 
file=/tmp/spdk.key-sha256.8r8 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 603d52274408090519486f4492047445 1 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 603d52274408090519486f4492047445 1 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=603d52274408090519486f4492047445 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8r8 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8r8 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8r8 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6c03580eaca128d76d96f73a7d08229f 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.wvK 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6c03580eaca128d76d96f73a7d08229f 1 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6c03580eaca128d76d96f73a7d08229f 1 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.897 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6c03580eaca128d76d96f73a7d08229f 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.wvK 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.wvK 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.wvK 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 
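Each gen_dhchap_key call traced above follows the same pattern: read random bytes, hex-encode them, wrap the hex string into an NVMe in-band-auth secret, and store it in a mode-0600 temp file. The sketch below mirrors those steps; the DHHC-1 wrapping in nvmf/common.sh is done by an inline Python helper (visible only as "python -" in the trace), so the encoding line here is an approximation and omits the checksum suffix that helper appends before base64-encoding.

gen_dhchap_key_sketch() {                         # approximates gen_dhchap_key <digest> <len>
    local digest=$1 len=$2 key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)       # len=32 -> 16 random bytes -> 32 hex chars
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # Simplified stand-in for format_dhchap_key: real output is DHHC-1:<digest-id>:<base64 payload>:
    printf 'DHHC-1:00:%s:\n' "$(printf '%s' "$key" | base64 -w0)" > "$file"
    chmod 0600 "$file"
    echo "$file"
}
keys_sketch=$(gen_dhchap_key_sketch null 32)      # analogous to the /tmp/spdk.key-null.* files above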
00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d98714d837240383b793b44e8170ca6b21cf9906a8b66eea 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.FY6 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d98714d837240383b793b44e8170ca6b21cf9906a8b66eea 2 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d98714d837240383b793b44e8170ca6b21cf9906a8b66eea 2 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d98714d837240383b793b44e8170ca6b21cf9906a8b66eea 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.FY6 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.FY6 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.FY6 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de46d9d16be69ab09357b9b97e45e7b6 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.hep 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de46d9d16be69ab09357b9b97e45e7b6 0 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de46d9d16be69ab09357b9b97e45e7b6 0 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de46d9d16be69ab09357b9b97e45e7b6 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.hep 00:22:57.898 23:48:46 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.hep 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.hep 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a293c40496bf025a7c17fb9fc6565c96f6948285e605b5c5eb3bece31fd6cb1f 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xKu 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a293c40496bf025a7c17fb9fc6565c96f6948285e605b5c5eb3bece31fd6cb1f 3 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a293c40496bf025a7c17fb9fc6565c96f6948285e605b5c5eb3bece31fd6cb1f 3 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a293c40496bf025a7c17fb9fc6565c96f6948285e605b5c5eb3bece31fd6cb1f 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:57.898 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xKu 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xKu 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xKu 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1557644 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@823 -- # '[' -z 1557644 ']' 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@828 -- # local max_retries=100 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
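The nvmfappstart / waitforlisten pair traced above starts the SPDK target and blocks until its RPC socket answers before any keyring or bdev_nvme RPCs are issued. A rough equivalent is sketched below; the binary path and flags are the ones shown in this log, while the rpc_get_methods poll is only an assumed liveness probe (the real waitforlisten helper from autotest_common.sh is not shown in this excerpt).

SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
# Poll the RPC socket until the app responds (waitforlisten adds retries and a timeout).
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early"; exit 1; }
    sleep 0.5
done
# Once the target is up, the generated key files are registered one by one, e.g.:
#   rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.xwA
#   rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4TO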
00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@832 -- # xtrace_disable 00:22:58.158 23:48:46 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@856 -- # return 0 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xwA 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4TO ]] 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4TO 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.50H 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.OJq ]] 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.OJq 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8r8 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.wvK ]] 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.wvK 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.158 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.417 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.417 23:48:47 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.FY6 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.hep ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.hep 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xKu 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@637 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:58.418 23:48:47 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:23:00.953 Waiting for block devices as requested 00:23:00.953 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:23:00.953 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:00.953 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:00.953 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:00.953 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:00.953 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:00.953 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:01.210 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:01.210 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:01.210 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:23:01.210 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:23:01.468 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:23:01.468 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:23:01.468 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:23:01.468 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:23:01.726 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:23:01.726 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1656 -- # local device=nvme0n1 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1658 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1659 -- # [[ none != none ]] 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:02.293 No valid GPT data, bailing 00:23:02.293 23:48:51 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:02.294 23:48:51 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:02.294 23:48:51 nvmf_rdma.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:02.294 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:02.294 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:02.294 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:02.294 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:02.553 
23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:23:02.553 00:23:02.553 Discovery Log Number of Records 2, Generation counter 2 00:23:02.553 =====Discovery Log Entry 0====== 00:23:02.553 trtype: rdma 00:23:02.553 adrfam: ipv4 00:23:02.553 subtype: current discovery subsystem 00:23:02.553 treq: not specified, sq flow control disable supported 00:23:02.553 portid: 1 00:23:02.553 trsvcid: 4420 00:23:02.553 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:02.553 traddr: 192.168.100.8 00:23:02.553 eflags: none 00:23:02.553 rdma_prtype: not specified 00:23:02.553 rdma_qptype: connected 00:23:02.553 rdma_cms: rdma-cm 00:23:02.553 rdma_pkey: 0x0000 00:23:02.553 =====Discovery Log Entry 1====== 00:23:02.553 trtype: rdma 00:23:02.553 adrfam: ipv4 00:23:02.553 subtype: nvme subsystem 00:23:02.553 treq: not specified, sq flow control disable supported 00:23:02.553 portid: 1 00:23:02.553 trsvcid: 4420 00:23:02.553 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:02.553 traddr: 192.168.100.8 00:23:02.553 eflags: none 00:23:02.553 rdma_prtype: not specified 00:23:02.553 rdma_qptype: connected 00:23:02.553 rdma_cms: rdma-cm 00:23:02.553 rdma_pkey: 0x0000 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 
'hmac(sha256)' 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.553 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.813 nvme0n1 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:02.813 23:48:51 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:02.814 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.814 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:02.814 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.072 nvme0n1 00:23:03.072 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.072 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.072 23:48:51 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.072 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.072 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.073 23:48:51 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.073 23:48:52 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:03.073 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.376 23:48:52 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.377 nvme0n1 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.377 23:48:52 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.377 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.691 nvme0n1 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:03.691 23:48:52 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.691 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.950 nvme0n1 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host 
-- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:03.950 23:48:52 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.209 nvme0n1 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- 
# [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:04.209 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:04.210 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.210 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.210 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.210 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.467 nvme0n1 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.467 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
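
The trace above repeats one cycle per digest / DH-group / key-index combination: restrict the host's DH-HMAC-CHAP options, attach the controller with the key under test, confirm it shows up in bdev_nvme_get_controllers, then detach. A minimal sketch of that cycle follows, using SPDK's scripts/rpc.py in place of the test's rpc_cmd wrapper; the RPC names, flags, address, port, NQNs and key names are taken verbatim from the trace, while the loop variables are illustrative and key0..key4 / ckey0..ckey4 are assumed to have been registered earlier in the run.

  # Hedged sketch of the host-side cycle visible in the trace; assumes SPDK's
  # scripts/rpc.py is on PATH and that keys key0..key4 / ckey0..ckey4 exist.
  target_ip=192.168.100.8
  hostnqn=nqn.2024-02.io.spdk:host0
  subnqn=nqn.2024-02.io.spdk:cnode0

  for digest in sha256 sha384 sha512; do
    for dhgroup in ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in 0 1 2 3 4; do
        # The script only adds --dhchap-ctrlr-key when a controller key is
        # defined for this index (index 4 has none in this run).
        ckey_opt=()
        [ "$keyid" -lt 4 ] && ckey_opt=(--dhchap-ctrlr-key "ckey$keyid")
        rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a "$target_ip" -s 4420 -q "$hostnqn" -n "$subnqn" \
          --dhchap-key "key$keyid" "${ckey_opt[@]}"
        # Authentication passed only if the controller actually came up.
        [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc.py bdev_nvme_detach_controller nvme0
      done
    done
  done

Failures would surface at the attach or the get_controllers step, which is why every iteration in the trace re-checks the controller name before detaching.
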
00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.725 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.983 nvme0n1 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:04.983 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:04.984 23:48:53 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.242 nvme0n1 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:05.242 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:05.243 
23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.243 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.501 nvme0n1 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:05.501 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.502 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.760 nvme0n1 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:05.760 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 
-- # dhgroup=ffdhe4096 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # 
xtrace_disable 00:23:06.018 23:48:54 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.276 nvme0n1 00:23:06.276 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.276 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.276 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.276 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.277 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.535 nvme0n1 00:23:06.535 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.535 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.535 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.535 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.535 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.535 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:06.793 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:06.794 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.052 nvme0n1 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
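The host/auth.sh@42-65 frames traced above repeat once per key. Put together, the host side of a single iteration reduces to the sketch below. It is reconstructed only from the traced commands, not from the literal host/auth.sh; rpc_cmd, the ckeys array and the registered key names key0..key4/ckey0..ckey4 are assumed to come from the surrounding test environment, and the address, port and NQNs are the ones visible in the trace.

  # Host-side flow for one digest/dhgroup/keyid combination, as a sketch
  # reconstructed from the xtrace output above (not the literal script).
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # The controller key is optional; keyid 4 in this run has no ckey.
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      # Restrict the initiator to the single digest/dhgroup under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Attach over RDMA using the DH-HMAC-CHAP key registered for this keyid.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"

      # Authentication succeeded if the controller came up; then clean up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }
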
00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.052 23:48:55 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.310 nvme0n1 00:23:07.310 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.310 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.310 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.310 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.310 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.311 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.569 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.828 nvme0n1 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.828 
23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:07.828 23:48:56 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.394 nvme0n1 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:08.394 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.961 nvme0n1 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 
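Each block also traces get_main_ns_ip (nvmf/common.sh@741-755), which resolves which environment variable holds the target address for the transport under test and prints its value, 192.168.100.8 in this run. A plausible reconstruction is sketched below; the TEST_TRANSPORT name and the error handling are assumptions, while the candidate map and the indirect expansion are taken from the traced expansions.

  # Sketch of nvmf/common.sh:get_main_ns_ip reconstructed from the trace.
  # TEST_TRANSPORT and the failure paths are assumptions; the candidate map
  # and the indirect lookup are what the expansions above show.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # resolves to 192.168.100.8 here
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      # Pick the variable name matching the transport, then dereference it.
      if [[ -z $TEST_TRANSPORT ]] || [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]]; then
          return 1
      fi
      ip=${ip_candidates[$TEST_TRANSPORT]}
      if [[ -z ${!ip} ]]; then
          return 1
      fi
      echo "${!ip}"
  }
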
00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:08.961 
23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:08.961 23:48:57 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.220 nvme0n1 00:23:09.220 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.220 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.220 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.220 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.220 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.220 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:09.479 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:09.480 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:09.480 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.480 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.738 nvme0n1 00:23:09.738 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.738 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.738 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.738 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.738 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.738 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:09.997 23:48:58 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:09.997 23:48:58 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:23:10.256 nvme0n1 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.256 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:10.515 23:48:59 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:10.515 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.084 nvme0n1 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.084 23:48:59 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.652 nvme0n1 00:23:11.652 23:49:00 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.652 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.652 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.652 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.652 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.652 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:11.909 23:49:00 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.475 nvme0n1 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.475 23:49:01 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:12.475 23:49:01 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.066 nvme0n1 00:23:13.066 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.066 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.066 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.066 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.066 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:13.324 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.325 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.891 nvme0n1 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:13.891 
23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:13.891 23:49:02 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.149 nvme0n1 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.149 
23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.149 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.150 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.150 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.150 
23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.150 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.150 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.150 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.150 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.150 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.150 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.406 nvme0n1 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.406 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.663 nvme0n1 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.663 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.921 nvme0n1 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.921 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:15.179 23:49:03 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.179 23:49:03 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.179 nvme0n1 00:23:15.179 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.179 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.179 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.179 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.437 23:49:04 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.437 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.695 nvme0n1 
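The passes above all follow the same shape: the host/auth.sh@100 - @104 trace markers show a sweep over every digest, DH group and key index, pushing the key material to the target via nvmet_auth_set_key and then running connect_authenticate against 192.168.100.8. A minimal sketch of that sweep, plus the host-side RPC sequence behind the single sha384/ffdhe3072/key0 pass that just enumerated nvme0n1, is below. It assumes the test's rpc_cmd helper resolves to scripts/rpc.py against the running target, and that key0/ckey0 name keys the script registered earlier in the run; neither of those details is visible in this excerpt, and the digests/dhgroups/keys arrays are likewise set up before this point.

    # sweep reconstructed from the host/auth.sh@100-@104 trace lines above
    for digest in "${digests[@]}"; do                       # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do                    # ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do                       # key indexes 0-4
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # mirrors the digest, dhgroup and DHHC-1 secrets to the target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
      done
    done

    # host side of one connect_authenticate pass, as seen in the trace (hypothetical standalone invocation)
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # must report nvme0 for the pass to count
    scripts/rpc.py bdev_nvme_detach_controller nvme0              # clean up before the next combination

Note that key 4 has no controller key in the trace (its ckey echoes as empty), so those passes attach with --dhchap-key key4 only, exercising unidirectional authentication.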
00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.695 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.953 nvme0n1 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:15.953 23:49:04 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.211 nvme0n1 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.211 23:49:05 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.211 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.468 
23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.468 nvme0n1 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.468 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:16.726 23:49:05 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.726 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.984 nvme0n1 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.984 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.985 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:16.985 23:49:05 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.985 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.985 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:16.985 23:49:05 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:16.985 23:49:05 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.985 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:16.985 23:49:05 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.242 nvme0n1 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe4096 1 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:17.242 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.807 nvme0n1 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:17.808 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 nvme0n1 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.066 
23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.066 23:49:06 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:18.066 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:18.067 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.067 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.067 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.632 nvme0n1 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.632 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.890 nvme0n1 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
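For reference, each iteration logged above follows the same RPC sequence: nvmet_auth_set_key loads the target-side key for the current keyid, bdev_nvme_set_options restricts the host to one DH-HMAC-CHAP digest/DH-group combination, bdev_nvme_attach_controller connects over RDMA with the host key (and controller key, when one is defined), and the controller is then listed and detached before the next combination. A minimal shell sketch of one such iteration, assuming SPDK's scripts/rpc.py is used directly (the test's rpc_cmd wrapper) and that keys named key1/ckey1 were registered earlier in the test setup, outside this excerpt:

# Restrict the host to one digest/DH-group combination for this pass
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

# Attach to the authenticated subsystem over RDMA using keyid 1
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Verify the controller came up, then tear it down for the next combination
./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'
./scripts/rpc.py bdev_nvme_detach_controller nvme0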
00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:18.890 23:49:07 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.455 nvme0n1 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.455 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:19.456 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.456 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.456 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:19.456 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:19.456 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.456 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:19.456 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 nvme0n1 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.021 23:49:08 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.021 23:49:08 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.279 nvme0n1 00:23:20.279 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.279 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.279 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.279 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.279 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.279 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.537 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.795 nvme0n1 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:20.795 23:49:09 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:20.795 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
NVMF_FIRST_TARGET_IP ]] 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.053 23:49:09 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.311 nvme0n1 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.311 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:21.569 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.135 nvme0n1 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.135 23:49:10 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host 
-- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.135 23:49:11 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.135 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.701 nvme0n1 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.701 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.959 
23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:22.959 23:49:11 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.525 nvme0n1 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup 
keyid key ckey 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.525 
23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:23.525 23:49:12 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.090 nvme0n1 00:23:24.090 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.090 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.090 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.090 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.090 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.090 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.348 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.913 nvme0n1 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 
00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:24.913 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:24.914 23:49:13 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.170 nvme0n1 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.170 23:49:14 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 
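The records around this point trace connect_authenticate (host/auth.sh@55-65): the host is restricted to the digest/dhgroup pair under test via bdev_nvme_set_options, the controller is attached with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller key exists for that index), and success is verified with bdev_nvme_get_controllers before the controller is detached again. A condensed bash sketch reconstructed from the xtrace follows; the literal values (transport rdma, 192.168.100.8:4420, the host and subsystem NQNs) are copied from the trace, rpc_cmd and the ckeys array come from the surrounding SPDK test environment, and the real script presumably derives these values from its configuration rather than hard-coding them.

    # Condensed reconstruction of connect_authenticate as seen in the xtrace above.
    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest="$1" dhgroup="$2" keyid="$3"
        # Only pass a controller key if one is defined for this key index (host/auth.sh@58).
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        # Limit the host to the digest/dhgroup combination under test (host/auth.sh@60).
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # Attach over RDMA with the key index under test (host/auth.sh@61); address and
        # NQNs are the values shown in the trace (get_main_ns_ip resolved to 192.168.100.8).
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
            -a 192.168.100.8 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # The attach only succeeds if DH-HMAC-CHAP completed, so the controller showing
        # up in bdev_nvme_get_controllers is the pass condition (host/auth.sh@64-65).
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }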
00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.170 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.427 nvme0n1 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
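The echo records at host/auth.sh@42-51 belong to nvmet_auth_set_key, which programs the target side with the secret for the key index under test: it emits 'hmac(<digest>)', the DH group name, the DHHC-1 secret and, when one exists, the bidirectional controller secret. A rough sketch of that helper, inferred from the trace, is below; xtrace does not show where the echoed values are redirected, so their destination (presumably the target's DH-HMAC-CHAP configuration for the host NQN) is only noted in a comment.

    # Rough reconstruction of nvmet_auth_set_key from the xtrace (host/auth.sh@42-51).
    # keys[] and ckeys[] hold the DHHC-1 secrets set up earlier in the test.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"
        # In the real script these echoes are redirected into the target-side auth
        # configuration (the redirections are not visible in xtrace); shown bare here.
        echo "hmac(${digest})"
        echo "${dhgroup}"
        echo "${key}"
        # The controller (bidirectional) key is optional; key index 4 has none in this run.
        [[ -z ${ckey} ]] || echo "${ckey}"
    }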
00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.427 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.685 nvme0n1 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.685 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:25.942 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.943 nvme0n1 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.943 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 
ffdhe2048 4 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.200 23:49:14 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.457 nvme0n1 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.457 23:49:15 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.457 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.714 nvme0n1 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:26.714 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.715 23:49:15 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.715 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.972 nvme0n1 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.972 23:49:15 
nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:26.972 23:49:15 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.228 nvme0n1 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.228 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:27.485 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 
-- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.486 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.743 nvme0n1 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=4 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:27.743 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 nvme0n1 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
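
For readers skimming the trace, each repeated block above reduces to the same host-side sequence. The sketch below is a hedged summary, not the literal host/auth.sh code: `rpc.py` stands in for the test's `rpc_cmd` wrapper, and the key names and NQNs are taken from the trace as placeholders (the actual secrets live in the test's key arrays).

```bash
#!/usr/bin/env bash
# Hedged sketch of the host-side steps the trace keeps repeating, built
# only from RPCs that appear in the log. Key names/NQNs are placeholders.

RPC=scripts/rpc.py            # rpc_cmd in the test wraps this client
TARGET_IP=192.168.100.8       # NVMF_FIRST_TARGET_IP in the trace

# 1. Restrict the host to one digest/dhgroup combination for this pass.
$RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# 2. Attach over RDMA, presenting the DH-HMAC-CHAP key (and, when the test
#    has one for this keyid, a controller key via --dhchap-ctrlr-key).
$RPC bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a "$TARGET_IP" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key3 --dhchap-ctrlr-key ckey3

# 3. Verify the controller came up, then tear it down for the next pass.
$RPC bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
$RPC bdev_nvme_detach_controller nvme0
```
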
00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.001 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.002 23:49:16 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.260 nvme0n1 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.260 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
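
The `for dhgroup ... for keyid ...` markers at host/auth.sh@101-@104 show the iteration driving all of this output. A hedged skeleton of that loop is sketched below; the array contents and the enclosing digest choice are assumptions (this portion of the trace only shows sha512), while the two helper names and their argument pattern are taken directly from the trace.

```bash
# Hedged skeleton of the iteration visible at host/auth.sh@101-@104:
# one pass per (dhgroup, keyid) pair, provisioning the target side and
# then authenticating from the host. Array contents are placeholders.

dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed subset seen here
keys=(key0 key1 key2 key3 key4)                      # placeholder key names

for dhgroup in "${dhgroups[@]}"; do          # host/auth.sh@101
    for keyid in "${!keys[@]}"; do           # host/auth.sh@102
        # Push digest/dhgroup/key to the kernel nvmet target side.
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # host/auth.sh@103
        # Attach from the host with the matching key, verify, detach.
        connect_authenticate sha512 "$dhgroup" "$keyid"  # host/auth.sh@104
    done
done
```
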
00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.518 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 nvme0n1 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.776 
23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:28.776 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.032 nvme0n1 00:23:29.032 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.032 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.032 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.032 23:49:17 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.032 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.032 23:49:17 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.032 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.032 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.032 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.033 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 
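
One detail worth calling out from the trace is host/auth.sh@58, where the script builds the optional `--dhchap-ctrlr-key` argument as an array that is simply empty when no controller key exists for the current keyid. The sketch below illustrates that bash idiom with placeholder data; `ckey_args` and the sample array contents are hypothetical names for illustration only.

```bash
# Hedged illustration of the ${var:+word} idiom used at host/auth.sh@58:
# the argument array is empty unless a controller key was provisioned,
# so the later rpc_cmd call can expand it unconditionally.

ckeys=([0]="present" [3]="")   # placeholder: keyid 3 has no controller key
keyid=3

# ${ckeys[keyid]:+...} expands only when the element is set and non-empty.
ckey_args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

# Prints the attach command with or without the two extra arguments.
echo rpc_cmd bdev_nvme_attach_controller -b nvme0 "${ckey_args[@]}"
```
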
00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.290 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 nvme0n1 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:29.548 23:49:18 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.548 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.806 nvme0n1 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:29.806 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.064 23:49:18 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.322 nvme0n1 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.322 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.580 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.871 nvme0n1 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:30.871 23:49:19 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.454 nvme0n1 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.454 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=3 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:31.455 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.021 nvme0n1 00:23:32.021 23:49:20 
nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.021 23:49:20 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.021 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.022 23:49:20 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.280 nvme0n1 00:23:32.280 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.280 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.280 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.280 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.280 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.280 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # 
echo ffdhe8192 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWIyNGNkOGFkYmY2MDFlNTZmYTc4MmY5YjRlY2M1ZmZNSXAD: 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: ]] 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NjE5NDVlN2VmYjc1YjQ5NGFlNWUzZGU2MWYyNzQwNjI5M2E0Y2FhMDBjNWQxMWQ2OTc3YjQ2MDdlNGUzNDVkZYQPQXs=: 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:32.538 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.104 nvme0n1 00:23:33.104 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:33.104 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.104 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:33.104 23:49:21 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.104 23:49:21 nvmf_rdma.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:33.104 23:49:21 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:33.104 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.038 nvme0n1 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjAzZDUyMjc0NDA4MDkwNTE5NDg2ZjQ0OTIwNDc0NDVmKiIu: 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NmMwMzU4MGVhY2ExMjhkNzZkOTZmNzNhN2QwODIyOWbJ7yTB: 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.038 23:49:22 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.604 nvme0n1 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDk4NzE0ZDgzNzI0MDM4M2I3OTNiNDRlODE3MGNhNmIyMWNmOTkwNmE4YjY2ZWVh3diDqw==: 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: ]] 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGU0NmQ5ZDE2YmU2OWFiMDkzNTdiOWI5N2U0NWU3YjbudZRo: 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:34.604 23:49:23 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:34.604 23:49:23 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.169 nvme0n1 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.169 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTI5M2M0MDQ5NmJmMDI1YTdjMTdmYjlmYzY1NjVjOTZmNjk0ODI4NWU2MDViNWM1ZWIzYmVjZTMxZmQ2Y2IxZuKraIw=: 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.427 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.992 nvme0n1 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTBkNDVlNmQzZWVmZmM2ZWMwOTIwYzcxMWJkMDQ3ZmNjNDk4NTY3YjZiZmYyZTQyDN9K/w==: 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGM0OTRkOTAzODVlYWI1MGE5NjgzOGQ4NzJiYjhkNzMwYmFkOGE5MzU1ZWEyNzIwsjy1/A==: 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 
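  Before every bdev_nvme_attach_controller call, the trace above runs get_main_ns_ip from nvmf/common.sh: it fills an associative array that maps the transport name to the shell variable holding the target address, then expands that variable indirectly (the @741 through @755 entries). The following is a minimal reconstruction of that selection logic from the trace, not the verbatim source; TEST_TRANSPORT is an assumed variable name standing in for whatever expanded to "rdma" in the log.

  # Approximation of the get_main_ns_ip logic seen in the nvmf/common.sh trace above.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP   # RDMA tests use the target-side IP (192.168.100.8 on this node)
          [tcp]=NVMF_INITIATOR_IP
      )
      # Pick the variable name for the active transport, then expand it indirectly.
      [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"
  }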
00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.992 request: 00:23:35.992 { 00:23:35.992 "name": "nvme0", 00:23:35.992 "trtype": "rdma", 00:23:35.992 "traddr": "192.168.100.8", 00:23:35.992 "adrfam": "ipv4", 00:23:35.992 "trsvcid": "4420", 00:23:35.992 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:35.992 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:35.992 "prchk_reftag": false, 00:23:35.992 "prchk_guard": false, 00:23:35.992 "hdgst": false, 00:23:35.992 "ddgst": false, 00:23:35.992 "method": "bdev_nvme_attach_controller", 00:23:35.992 "req_id": 1 00:23:35.992 } 00:23:35.992 Got JSON-RPC error response 00:23:35.992 response: 00:23:35.992 { 00:23:35.992 "code": -5, 00:23:35.992 "message": "Input/output error" 00:23:35.992 } 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:23:35.992 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:23:36.250 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.250 23:49:24 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:36.250 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.250 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.250 23:49:24 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.250 request: 00:23:36.250 { 00:23:36.250 "name": "nvme0", 00:23:36.250 "trtype": "rdma", 00:23:36.250 "traddr": "192.168.100.8", 00:23:36.250 "adrfam": "ipv4", 00:23:36.250 "trsvcid": "4420", 00:23:36.250 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:36.250 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:36.250 "prchk_reftag": false, 00:23:36.250 "prchk_guard": false, 00:23:36.250 "hdgst": false, 00:23:36.250 "ddgst": false, 00:23:36.250 "dhchap_key": "key2", 00:23:36.250 "method": "bdev_nvme_attach_controller", 00:23:36.250 "req_id": 1 00:23:36.250 } 00:23:36.250 Got JSON-RPC error response 00:23:36.250 response: 00:23:36.250 { 00:23:36.250 "code": -5, 00:23:36.250 "message": "Input/output error" 00:23:36.250 } 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:36.250 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.251 23:49:25 
nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@642 -- # local es=0 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@644 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@630 -- # local arg=rpc_cmd 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # type -t rpc_cmd 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@645 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:36.251 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.508 request: 00:23:36.508 { 00:23:36.508 "name": "nvme0", 00:23:36.508 "trtype": "rdma", 00:23:36.508 "traddr": "192.168.100.8", 00:23:36.508 "adrfam": "ipv4", 00:23:36.508 "trsvcid": "4420", 00:23:36.508 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:36.508 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:36.508 "prchk_reftag": false, 00:23:36.508 "prchk_guard": false, 00:23:36.508 "hdgst": false, 00:23:36.508 "ddgst": false, 00:23:36.508 "dhchap_key": "key1", 00:23:36.508 "dhchap_ctrlr_key": "ckey2", 00:23:36.508 "method": "bdev_nvme_attach_controller", 00:23:36.508 "req_id": 1 00:23:36.508 } 00:23:36.508 Got JSON-RPC error response 00:23:36.508 response: 00:23:36.508 { 00:23:36.508 "code": -5, 00:23:36.508 "message": "Input/output error" 00:23:36.508 } 00:23:36.508 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@581 -- # [[ 1 == 0 ]] 00:23:36.508 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@645 -- # es=1 00:23:36.508 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 
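  The three failing attach attempts above (no key, key2 only, key1 with a mismatched ckey2) are each wrapped in the NOT helper from autotest_common.sh: the JSON-RPC "Input/output error" (code -5) is the expected outcome, and the test step passes only because the wrapped rpc_cmd exits non-zero. A simplified stand-in for that assertion pattern, inferred from the @642 through @669 entries rather than copied from the real helper:

  # Simplified NOT-style assertion: succeed only when the wrapped command fails,
  # e.g. NOT rpc_cmd bdev_nvme_attach_controller ... --dhchap-key key2
  NOT() {
      local es=0
      "$@" || es=$?
      # Exit codes above 128 indicate a signal; treat those as harness failures, not a passing assertion.
      (( es > 128 )) && return "$es"
      (( es != 0 ))   # assertion holds only when the command returned non-zero
  }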
00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:36.509 rmmod nvme_rdma 00:23:36.509 rmmod nvme_fabrics 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1557644 ']' 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1557644 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@942 -- # '[' -z 1557644 ']' 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@946 -- # kill -0 1557644 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@947 -- # uname 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1557644 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1557644' 00:23:36.509 killing process with pid 1557644 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@961 -- # kill 1557644 00:23:36.509 23:49:25 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@966 -- # wait 1557644 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:36.766 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:36.767 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir 
/sys/kernel/config/nvmet/ports/1 00:23:36.767 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:36.767 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:36.767 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:23:36.767 23:49:25 nvmf_rdma.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:39.292 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:39.292 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:39.292 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:39.292 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:39.292 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:39.292 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:39.292 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:39.292 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:39.550 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:39.550 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:39.550 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:39.550 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:39.550 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:39.550 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:39.550 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:39.550 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:40.927 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:23:40.927 23:49:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.xwA /tmp/spdk.key-null.50H /tmp/spdk.key-sha256.8r8 /tmp/spdk.key-sha384.FY6 /tmp/spdk.key-sha512.xKu /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:23:40.927 23:49:29 nvmf_rdma.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:43.455 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:43.455 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:43.455 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:43.455 00:23:43.455 real 0m52.281s 00:23:43.455 user 0m48.153s 00:23:43.455 sys 0m11.490s 00:23:43.455 23:49:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@1118 -- # xtrace_disable 00:23:43.455 23:49:32 nvmf_rdma.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.455 ************************************ 00:23:43.455 END TEST nvmf_auth_host 
00:23:43.455 ************************************ 00:23:43.713 23:49:32 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:23:43.713 23:49:32 nvmf_rdma -- nvmf/nvmf.sh@107 -- # [[ rdma == \t\c\p ]] 00:23:43.713 23:49:32 nvmf_rdma -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:23:43.713 23:49:32 nvmf_rdma -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:23:43.713 23:49:32 nvmf_rdma -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:23:43.713 23:49:32 nvmf_rdma -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:43.713 23:49:32 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:23:43.713 23:49:32 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:23:43.713 23:49:32 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:23:43.713 ************************************ 00:23:43.713 START TEST nvmf_bdevperf 00:23:43.713 ************************************ 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:43.713 * Looking for test storage... 00:23:43.713 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
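  As the bdevperf suite starts, nvmf/common.sh generates a host NQN with nvme-cli and derives the host ID from its uuid suffix (the @17 through @19 entries above). A small sketch of that setup; only the resulting values are visible in the log, so the parameter expansion used to strip the prefix is an assumption:

  # Generate a host NQN and derive the host ID from its uuid: suffix,
  # mirroring NVME_HOSTNQN / NVME_HOSTID in the trace above.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:803833e2-...-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID portion (assumed derivation)
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")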
00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.713 23:49:32 
nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.713 23:49:32 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:48.977 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:48.977 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.977 23:49:37 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:48.977 Found net devices under 0000:da:00.0: mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:48.977 Found net devices under 0000:da:00.1: mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:48.977 23:49:37 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:48.977 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:48.977 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:23:48.977 altname enp218s0f0np0 00:23:48.977 altname ens818f0np0 00:23:48.977 inet 192.168.100.8/24 scope global mlx_0_0 00:23:48.977 valid_lft forever preferred_lft forever 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:48.977 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:48.977 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:23:48.977 altname enp218s0f1np1 00:23:48.977 altname ens818f1np1 00:23:48.977 inet 192.168.100.9/24 scope global mlx_0_1 00:23:48.977 valid_lft forever preferred_lft forever 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:48.977 192.168.100.9' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:48.977 192.168.100.9' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:23:48.977 23:49:37 
nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:48.977 192.168.100.9' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:48.977 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1571142 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1571142 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 1571142 ']' 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:48.978 23:49:37 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:49.233 [2024-07-15 23:49:37.994085] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:23:49.233 [2024-07-15 23:49:37.994134] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.233 [2024-07-15 23:49:38.049379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:49.233 [2024-07-15 23:49:38.129772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.233 [2024-07-15 23:49:38.129808] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:49.233 [2024-07-15 23:49:38.129814] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.234 [2024-07-15 23:49:38.129820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.234 [2024-07-15 23:49:38.129825] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.234 [2024-07-15 23:49:38.129947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.234 [2024-07-15 23:49:38.129974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.234 [2024-07-15 23:49:38.129975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:50.164 [2024-07-15 23:49:38.858010] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7fb200/0x7ff6f0) succeed. 00:23:50.164 [2024-07-15 23:49:38.866959] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7fc7a0/0x840d80) succeed. 
00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:50.164 Malloc0 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:50.164 23:49:38 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:50.164 [2024-07-15 23:49:39.007165] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.164 { 00:23:50.164 "params": { 00:23:50.164 "name": "Nvme$subsystem", 00:23:50.164 "trtype": "$TEST_TRANSPORT", 00:23:50.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.164 "adrfam": "ipv4", 00:23:50.164 "trsvcid": "$NVMF_PORT", 00:23:50.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.164 "hdgst": ${hdgst:-false}, 00:23:50.164 "ddgst": ${ddgst:-false} 00:23:50.164 }, 00:23:50.164 "method": "bdev_nvme_attach_controller" 00:23:50.164 } 00:23:50.164 EOF 00:23:50.164 )") 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:50.164 23:49:39 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:50.164 "params": { 00:23:50.164 "name": "Nvme1", 00:23:50.164 "trtype": "rdma", 00:23:50.164 "traddr": "192.168.100.8", 00:23:50.164 "adrfam": "ipv4", 00:23:50.164 "trsvcid": "4420", 00:23:50.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.164 "hdgst": false, 00:23:50.164 "ddgst": false 00:23:50.164 }, 00:23:50.164 "method": "bdev_nvme_attach_controller" 00:23:50.164 }' 00:23:50.164 [2024-07-15 23:49:39.053017] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:23:50.164 [2024-07-15 23:49:39.053058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571390 ] 00:23:50.164 [2024-07-15 23:49:39.106897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.420 [2024-07-15 23:49:39.180296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.420 Running I/O for 1 seconds... 00:23:51.791 00:23:51.791 Latency(us) 00:23:51.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:51.791 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:51.791 Verification LBA range: start 0x0 length 0x4000 00:23:51.791 Nvme1n1 : 1.01 17886.17 69.87 0.00 0.00 7110.74 1911.47 12233.39 00:23:51.791 =================================================================================================================== 00:23:51.791 Total : 17886.17 69.87 0.00 0.00 7110.74 1911.47 12233.39 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1571629 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.791 { 00:23:51.791 "params": { 00:23:51.791 "name": "Nvme$subsystem", 00:23:51.791 "trtype": "$TEST_TRANSPORT", 00:23:51.791 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.791 "adrfam": "ipv4", 00:23:51.791 "trsvcid": "$NVMF_PORT", 00:23:51.791 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.791 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.791 "hdgst": ${hdgst:-false}, 00:23:51.791 "ddgst": ${ddgst:-false} 00:23:51.791 }, 00:23:51.791 "method": "bdev_nvme_attach_controller" 00:23:51.791 } 00:23:51.791 EOF 00:23:51.791 )") 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:51.791 23:49:40 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:51.791 "params": { 00:23:51.791 "name": "Nvme1", 00:23:51.791 "trtype": "rdma", 00:23:51.791 "traddr": "192.168.100.8", 00:23:51.791 "adrfam": "ipv4", 00:23:51.791 "trsvcid": "4420", 00:23:51.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.791 "hdgst": false, 00:23:51.791 "ddgst": false 00:23:51.791 }, 00:23:51.791 "method": "bdev_nvme_attach_controller" 00:23:51.791 }' 00:23:51.791 [2024-07-15 23:49:40.619031] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:23:51.791 [2024-07-15 23:49:40.619081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1571629 ] 00:23:51.791 [2024-07-15 23:49:40.674687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.791 [2024-07-15 23:49:40.744096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.050 Running I/O for 15 seconds... 00:23:55.327 23:49:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1571142 00:23:55.327 23:49:43 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:23:55.896 [2024-07-15 23:49:44.608309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:123296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:123304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:123344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.896 [2024-07-15 23:49:44.608670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:123352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.896 [2024-07-15 23:49:44.608676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:123408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:123416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:123424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:123432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:123440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:123448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:123456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 
23:49:44.608855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:123464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:123472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:123488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:123504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:123520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:123528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.608987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123536 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:55.897 [2024-07-15 23:49:44.608993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:123552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:123568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:123584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:123616 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:123648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:123656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:123672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.897 [2024-07-15 23:49:44.609241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.897 [2024-07-15 23:49:44.609248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:123688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 
nsid:1 lba:123696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:123736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:123752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609401] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:123776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:123792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:123800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:123808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:123848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:123864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:123872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:55.898 [2024-07-15 23:49:44.609618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:122888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:122904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609675] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:122920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:122928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:122952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:122960 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:122976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:122984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x183e00 00:23:55.898 [2024-07-15 23:49:44.609816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.898 [2024-07-15 23:49:44.609824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:123000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:123008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 
[2024-07-15 23:49:44.609934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:123072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.609992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:123088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.609998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:123096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:123136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.610121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183e00 00:23:55.899 [2024-07-15 23:49:44.610127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:1c9db000 sqhd:52b0 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.611961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:55.899 [2024-07-15 23:49:44.611972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:55.899 [2024-07-15 23:49:44.611978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:123168 len:8 PRP1 0x0 PRP2 0x0 00:23:55.899 [2024-07-15 23:49:44.611985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:55.899 [2024-07-15 23:49:44.612024] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
00:23:55.899 [2024-07-15 23:49:44.614723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:55.899 [2024-07-15 23:49:44.628522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:55.899 [2024-07-15 23:49:44.631118] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:55.899 [2024-07-15 23:49:44.631142] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:55.899 [2024-07-15 23:49:44.631148] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:23:56.833 [2024-07-15 23:49:45.635142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:56.833 [2024-07-15 23:49:45.635193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:56.833 [2024-07-15 23:49:45.635758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:56.833 [2024-07-15 23:49:45.635772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:56.833 [2024-07-15 23:49:45.635783] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:56.833 [2024-07-15 23:49:45.639498] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:56.833 [2024-07-15 23:49:45.640281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
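Editor's note: the pattern above (CQ transport error -6, RDMA_CM_EVENT_REJECTED, then "Resetting controller failed.") is bdev_nvme retrying the controller reset while the remote target is down; each attempt is rejected at the RDMA CM layer because nothing is listening on 192.168.100.8:4420 yet. How long an initiator keeps retrying before the bdev is given up is set when the controller is attached. A hedged sketch of attaching with explicit reconnect limits follows; the long option names are taken from recent SPDK scripts/rpc.py and do not appear in this log, so treat them as assumptions and check rpc.py bdev_nvme_attach_controller --help for your version:

    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 5 --fast-io-fail-timeout-sec 10

With settings like these, reconnect attempts are paced reconnect-delay-sec apart and the controller is dropped after ctrlr-loss-timeout-sec if the listener never comes back.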
00:23:56.833 [2024-07-15 23:49:45.652809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:56.833 [2024-07-15 23:49:45.655752] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:56.833 [2024-07-15 23:49:45.655777] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:56.833 [2024-07-15 23:49:45.655786] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:23:57.768 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1571142 Killed "${NVMF_APP[@]}" "$@" 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1572558 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1572558 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@823 -- # '[' -z 1572558 ']' 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@828 -- # local max_retries=100 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@832 -- # xtrace_disable 00:23:57.768 23:49:46 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:57.768 [2024-07-15 23:49:46.635548] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:23:57.768 [2024-07-15 23:49:46.635591] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.768 [2024-07-15 23:49:46.659904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:57.768 [2024-07-15 23:49:46.659928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:57.768 [2024-07-15 23:49:46.660121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:57.768 [2024-07-15 23:49:46.660129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:57.768 [2024-07-15 23:49:46.660136] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:57.768 [2024-07-15 23:49:46.662897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.768 [2024-07-15 23:49:46.666667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:57.768 [2024-07-15 23:49:46.669164] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:23:57.768 [2024-07-15 23:49:46.669183] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:23:57.768 [2024-07-15 23:49:46.669189] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040 00:23:57.768 [2024-07-15 23:49:46.691202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:58.026 [2024-07-15 23:49:46.771353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.026 [2024-07-15 23:49:46.771384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.026 [2024-07-15 23:49:46.771391] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.026 [2024-07-15 23:49:46.771397] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.026 [2024-07-15 23:49:46.771401] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.026 [2024-07-15 23:49:46.771436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.026 [2024-07-15 23:49:46.771463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.026 [2024-07-15 23:49:46.771463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@856 -- # return 0 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.592 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.592 [2024-07-15 23:49:47.507701] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8e0200/0x8e46f0) succeed. 00:23:58.592 [2024-07-15 23:49:47.516813] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8e17a0/0x925d80) succeed. 
00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.850 Malloc0 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:58.850 [2024-07-15 23:49:47.659439] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:23:58.850 23:49:47 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1571629 00:23:58.850 [2024-07-15 23:49:47.673175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:23:58.850 [2024-07-15 23:49:47.673203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:58.850 [2024-07-15 23:49:47.673379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:58.850 [2024-07-15 23:49:47.673388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:58.850 [2024-07-15 23:49:47.673396] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:58.850 [2024-07-15 23:49:47.676144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:58.850 [2024-07-15 23:49:47.681173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:58.850 [2024-07-15 23:49:47.723536] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
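Editor's note: at this point tgt_init has rebuilt the target from scratch: a new nvmf_tgt process, an RDMA transport, a 64 MiB Malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, and a listener on 192.168.100.8:4420, after which the initiator's pending reset finally succeeds. A minimal manual equivalent of that bring-up, assuming an in-tree build and that rpc_cmd is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock:

    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420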
00:24:06.991
00:24:06.991 Latency(us)
00:24:06.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:06.991 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:06.991 Verification LBA range: start 0x0 length 0x4000
00:24:06.991 Nvme1n1 : 15.01 13123.60 51.26 10339.48 0.00 5434.22 339.38 1030600.41
00:24:06.991 ===================================================================================================================
00:24:06.991 Total : 13123.60 51.26 10339.48 0.00 5434.22 339.38 1030600.41
00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:07.248 rmmod nvme_rdma 00:24:07.248 rmmod nvme_fabrics 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1572558 ']' 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1572558 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@942 -- # '[' -z 1572558 ']' 00:24:07.248 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@946 -- # kill -0 1572558 00:24:07.505 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@947 -- # uname 00:24:07.505 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:07.505 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1572558 00:24:07.505 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@948 -- # process_name=reactor_1 00:24:07.505 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' reactor_1 = sudo ']' 00:24:07.505 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1572558' 00:24:07.505 killing process with pid 1572558 00:24:07.505 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@961 -- # kill 1572558 00:24:07.505 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@966 -- # wait 1572558 00:24:07.762 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso
']' 00:24:07.762 23:49:56 nvmf_rdma.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:07.762 00:24:07.762 real 0m24.027s 00:24:07.762 user 1m4.084s 00:24:07.762 sys 0m5.067s 00:24:07.762 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:07.762 23:49:56 nvmf_rdma.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:07.762 ************************************ 00:24:07.762 END TEST nvmf_bdevperf 00:24:07.762 ************************************ 00:24:07.762 23:49:56 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:24:07.762 23:49:56 nvmf_rdma -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:07.762 23:49:56 nvmf_rdma -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:07.762 23:49:56 nvmf_rdma -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:07.762 23:49:56 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:07.762 ************************************ 00:24:07.762 START TEST nvmf_target_disconnect 00:24:07.762 ************************************ 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:24:07.762 * Looking for test storage... 00:24:07.762 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.762 23:49:56 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@29 
-- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.763 23:49:56 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:13.032 23:50:01 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:13.032 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:13.032 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:13.032 Found net devices under 0000:da:00.0: mlx_0_0 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:13.032 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:13.033 Found net devices under 0000:da:00.1: mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:13.033 23:50:01 
nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:13.033 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:13.033 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:13.033 altname enp218s0f0np0 00:24:13.033 altname ens818f0np0 00:24:13.033 inet 192.168.100.8/24 scope global mlx_0_0 00:24:13.033 valid_lft forever preferred_lft forever 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:13.033 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:13.033 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:13.033 altname enp218s0f1np1 00:24:13.033 altname ens818f1np1 00:24:13.033 inet 192.168.100.9/24 scope global mlx_0_1 00:24:13.033 valid_lft forever preferred_lft forever 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
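Editor's note: the block above is nvmf/common.sh loading the kernel RDMA stack and handing each mlx5 port an address in the 192.168.100.0/24 test subnet; the ip addr output confirms 192.168.100.8 on mlx_0_0 and 192.168.100.9 on mlx_0_1, which become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP below. A hand-rolled equivalent, assuming the same interface names and that no addresses are configured yet (the harness's allocate_nic_ips may do more than this):

    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do modprobe $m; done
    ip addr add 192.168.100.8/24 dev mlx_0_0
    ip addr add 192.168.100.9/24 dev mlx_0_1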
00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:13.033 192.168.100.9' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:13.033 192.168.100.9' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:13.033 192.168.100.9' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:13.033 23:50:01 
nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:13.033 23:50:01 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:13.292 ************************************ 00:24:13.292 START TEST nvmf_target_disconnect_tc1 00:24:13.292 ************************************ 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc1 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # local es=0 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@630 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@634 -- # case "$(type -t "$arg")" in 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:24:13.292 23:50:02 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:13.292 [2024-07-15 23:50:02.140236] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:13.292 [2024-07-15 23:50:02.140325] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:13.292 [2024-07-15 23:50:02.140347] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 
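Editor's note: test case tc1 only verifies that probing an address with no NVMe-oF listener fails cleanly; the NOT wrapper runs the reconnect example and counts a non-zero exit status as a pass. Rerunning the probe attempt by hand would look roughly like this (same arguments as in the xtrace above; the path assumes an in-tree build):

    ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'

With nothing listening on 192.168.100.8:4420, the RDMA connect is rejected and spdk_nvme_probe() returns an error, which is exactly what the next few entries record.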
00:24:14.227 [2024-07-15 23:50:03.144437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:14.227 [2024-07-15 23:50:03.144498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:24:14.227 [2024-07-15 23:50:03.144524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:24:14.227 [2024-07-15 23:50:03.144593] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:14.227 [2024-07-15 23:50:03.144615] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:14.227 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:24:14.227 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:14.227 Initializing NVMe Controllers 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@645 -- # es=1 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # (( es > 128 )) 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@664 -- # [[ -n '' ]] 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@669 -- # (( !es == 0 )) 00:24:14.227 00:24:14.227 real 0m1.119s 00:24:14.227 user 0m0.947s 00:24:14.227 sys 0m0.161s 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:14.227 ************************************ 00:24:14.227 END TEST nvmf_target_disconnect_tc1 00:24:14.227 ************************************ 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:14.227 23:50:03 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:14.488 ************************************ 00:24:14.488 START TEST nvmf_target_disconnect_tc2 00:24:14.488 ************************************ 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc2 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:14.488 23:50:03 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1577478 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1577478 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1577478 ']' 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:14.488 23:50:03 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:14.488 [2024-07-15 23:50:03.283802] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:24:14.488 [2024-07-15 23:50:03.283847] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.488 [2024-07-15 23:50:03.350296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.488 [2024-07-15 23:50:03.420975] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.488 [2024-07-15 23:50:03.421017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.488 [2024-07-15 23:50:03.421023] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.488 [2024-07-15 23:50:03.421029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.488 [2024-07-15 23:50:03.421033] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:14.488 [2024-07-15 23:50:03.421163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:14.488 [2024-07-15 23:50:03.421276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:14.488 [2024-07-15 23:50:03.421361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:14.488 [2024-07-15 23:50:03.421362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.427 Malloc0 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:15.427 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.428 [2024-07-15 23:50:04.179771] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd03cf0/0xd0f8c0) succeed. 00:24:15.428 [2024-07-15 23:50:04.189146] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd05330/0xd50f50) succeed. 
00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.428 [2024-07-15 23:50:04.328258] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1577729 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:15.428 23:50:04 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:17.389 23:50:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1577478 00:24:17.389 23:50:06 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting 
I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Read completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 Write completed with error (sct=0, sc=8) 00:24:18.785 starting I/O failed 00:24:18.785 [2024-07-15 23:50:07.508023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:19.719 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1577478 Killed "${NVMF_APP[@]}" "$@" 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1578343 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1578343 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@823 -- # '[' -z 1578343 ']' 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:19.719 23:50:08 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.719 [2024-07-15 23:50:08.400644] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:24:19.719 [2024-07-15 23:50:08.400689] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.719 [2024-07-15 23:50:08.467760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.719 Read completed with error (sct=0, sc=8) 00:24:19.719 starting I/O failed 00:24:19.719 Write completed with error (sct=0, sc=8) 00:24:19.719 starting I/O failed 00:24:19.719 Write completed with error (sct=0, sc=8) 00:24:19.719 starting I/O failed 00:24:19.719 Write completed with error (sct=0, sc=8) 00:24:19.719 starting I/O failed 00:24:19.719 Write completed with error (sct=0, sc=8) 00:24:19.719 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 
starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Write completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 Read completed with error (sct=0, sc=8) 00:24:19.720 starting I/O failed 00:24:19.720 [2024-07-15 23:50:08.513126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:19.720 [2024-07-15 23:50:08.545555] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.720 [2024-07-15 23:50:08.545584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.720 [2024-07-15 23:50:08.545591] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.720 [2024-07-15 23:50:08.545597] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.720 [2024-07-15 23:50:08.545602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:19.720 [2024-07-15 23:50:08.545708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:19.720 [2024-07-15 23:50:08.545815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:19.720 [2024-07-15 23:50:08.545920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:19.720 [2024-07-15 23:50:08.545922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # return 0 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.285 Malloc0 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:20.285 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.542 [2024-07-15 23:50:09.289759] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xa1ecf0/0xa2a8c0) succeed. 00:24:20.542 [2024-07-15 23:50:09.299077] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xa20330/0xa6bf50) succeed. 
00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.542 [2024-07-15 23:50:09.442264] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:20.542 23:50:09 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1577729 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O 
failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Write completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 Read completed with error (sct=0, sc=8) 00:24:20.542 starting I/O failed 00:24:20.542 [2024-07-15 23:50:09.518088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.800 [2024-07-15 23:50:09.526341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.800 [2024-07-15 23:50:09.526388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.800 [2024-07-15 23:50:09.526408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.800 [2024-07-15 23:50:09.526416] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.800 [2024-07-15 23:50:09.526423] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.800 [2024-07-15 23:50:09.536591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.800 qpair failed and we were unable to recover it. 
00:24:20.801 [2024-07-15 23:50:09.546272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.546315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.546331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.546338] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.546344] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.556797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 00:24:20.801 [2024-07-15 23:50:09.566354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.566395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.566410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.566417] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.566423] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.576725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 00:24:20.801 [2024-07-15 23:50:09.586427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.586467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.586482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.586489] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.586495] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.596694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 
00:24:20.801 [2024-07-15 23:50:09.606388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.606433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.606448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.606454] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.606460] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.617007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 00:24:20.801 [2024-07-15 23:50:09.626485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.626523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.626542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.626550] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.626555] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.637046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 00:24:20.801 [2024-07-15 23:50:09.646535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.646576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.646590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.646599] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.646605] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.656969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 
00:24:20.801 [2024-07-15 23:50:09.666585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.666625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.666639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.666646] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.666652] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.676967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 00:24:20.801 [2024-07-15 23:50:09.686649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.686687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.686702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.686709] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.686714] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.697065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 00:24:20.801 [2024-07-15 23:50:09.706813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.706848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.706864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.706871] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.706876] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.717106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 
00:24:20.801 [2024-07-15 23:50:09.726662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.726703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.726718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.726725] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.726730] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.737176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 00:24:20.801 [2024-07-15 23:50:09.746843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.746883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.746900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.746906] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.746912] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.757221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 00:24:20.801 [2024-07-15 23:50:09.766890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.801 [2024-07-15 23:50:09.766938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.801 [2024-07-15 23:50:09.766952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.801 [2024-07-15 23:50:09.766958] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.801 [2024-07-15 23:50:09.766964] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.801 [2024-07-15 23:50:09.777346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.801 qpair failed and we were unable to recover it. 
00:24:21.060 [2024-07-15 23:50:09.787000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.787036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.787050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.787057] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.787063] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.060 [2024-07-15 23:50:09.797448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.060 qpair failed and we were unable to recover it. 00:24:21.060 [2024-07-15 23:50:09.807097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.807133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.807147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.807154] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.807159] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.060 [2024-07-15 23:50:09.817582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.060 qpair failed and we were unable to recover it. 00:24:21.060 [2024-07-15 23:50:09.827113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.827151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.827168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.827175] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.827180] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.060 [2024-07-15 23:50:09.837233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.060 qpair failed and we were unable to recover it. 
00:24:21.060 [2024-07-15 23:50:09.847136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.847177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.847192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.847198] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.847204] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.060 [2024-07-15 23:50:09.857567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.060 qpair failed and we were unable to recover it. 00:24:21.060 [2024-07-15 23:50:09.867257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.867300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.867315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.867322] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.867328] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.060 [2024-07-15 23:50:09.877440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.060 qpair failed and we were unable to recover it. 00:24:21.060 [2024-07-15 23:50:09.887215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.887256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.887270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.887277] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.887283] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.060 [2024-07-15 23:50:09.897639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.060 qpair failed and we were unable to recover it. 
00:24:21.060 [2024-07-15 23:50:09.907286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.907325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.907341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.907348] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.907357] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.060 [2024-07-15 23:50:09.917643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.060 qpair failed and we were unable to recover it. 00:24:21.060 [2024-07-15 23:50:09.927398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.927441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.927456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.927462] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.927468] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.060 [2024-07-15 23:50:09.937547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.060 qpair failed and we were unable to recover it. 00:24:21.060 [2024-07-15 23:50:09.947463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.060 [2024-07-15 23:50:09.947502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.060 [2024-07-15 23:50:09.947517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.060 [2024-07-15 23:50:09.947523] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.060 [2024-07-15 23:50:09.947529] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.061 [2024-07-15 23:50:09.957741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.061 qpair failed and we were unable to recover it. 
00:24:21.061 [2024-07-15 23:50:09.967416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.061 [2024-07-15 23:50:09.967453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.061 [2024-07-15 23:50:09.967469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.061 [2024-07-15 23:50:09.967475] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.061 [2024-07-15 23:50:09.967481] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.061 [2024-07-15 23:50:09.977897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.061 qpair failed and we were unable to recover it. 00:24:21.061 [2024-07-15 23:50:09.987496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.061 [2024-07-15 23:50:09.987534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.061 [2024-07-15 23:50:09.987554] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.061 [2024-07-15 23:50:09.987561] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.061 [2024-07-15 23:50:09.987567] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.061 [2024-07-15 23:50:09.997955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.061 qpair failed and we were unable to recover it. 00:24:21.061 [2024-07-15 23:50:10.007613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.061 [2024-07-15 23:50:10.007660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.061 [2024-07-15 23:50:10.007677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.061 [2024-07-15 23:50:10.007684] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.061 [2024-07-15 23:50:10.007691] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.061 [2024-07-15 23:50:10.017996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.061 qpair failed and we were unable to recover it. 
00:24:21.061 [2024-07-15 23:50:10.027645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.061 [2024-07-15 23:50:10.027682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.061 [2024-07-15 23:50:10.027698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.061 [2024-07-15 23:50:10.027705] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.061 [2024-07-15 23:50:10.027711] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.061 [2024-07-15 23:50:10.037999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.061 qpair failed and we were unable to recover it. 00:24:21.319 [2024-07-15 23:50:10.047670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.319 [2024-07-15 23:50:10.047706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.319 [2024-07-15 23:50:10.047723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.319 [2024-07-15 23:50:10.047730] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.319 [2024-07-15 23:50:10.047736] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.319 [2024-07-15 23:50:10.057910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.319 qpair failed and we were unable to recover it. 00:24:21.319 [2024-07-15 23:50:10.067743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.319 [2024-07-15 23:50:10.067785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.319 [2024-07-15 23:50:10.067800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.319 [2024-07-15 23:50:10.067807] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.319 [2024-07-15 23:50:10.067813] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.319 [2024-07-15 23:50:10.078318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.319 qpair failed and we were unable to recover it. 
00:24:21.319 [2024-07-15 23:50:10.087873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.319 [2024-07-15 23:50:10.087923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.319 [2024-07-15 23:50:10.087940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.087947] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.087952] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.098199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 00:24:21.320 [2024-07-15 23:50:10.107826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.107867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.107883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.107890] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.107896] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.118012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 00:24:21.320 [2024-07-15 23:50:10.127824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.127860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.127874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.127881] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.127886] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.138357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 
00:24:21.320 [2024-07-15 23:50:10.147919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.147956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.147970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.147977] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.147982] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.158496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 00:24:21.320 [2024-07-15 23:50:10.167980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.168024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.168038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.168044] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.168050] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.178455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 00:24:21.320 [2024-07-15 23:50:10.188155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.188196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.188210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.188217] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.188223] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.198460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 
00:24:21.320 [2024-07-15 23:50:10.208138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.208178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.208192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.208199] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.208204] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.218628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 00:24:21.320 [2024-07-15 23:50:10.228270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.228310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.228323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.228330] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.228335] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.238573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 00:24:21.320 [2024-07-15 23:50:10.248257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.248295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.248309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.248316] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.248321] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.258748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 
00:24:21.320 [2024-07-15 23:50:10.268288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.268330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.268347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.268353] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.268359] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.278779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 00:24:21.320 [2024-07-15 23:50:10.288337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.320 [2024-07-15 23:50:10.288373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.320 [2024-07-15 23:50:10.288388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.320 [2024-07-15 23:50:10.288394] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.320 [2024-07-15 23:50:10.288400] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.320 [2024-07-15 23:50:10.298680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.320 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-15 23:50:10.308443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.578 [2024-07-15 23:50:10.308483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.578 [2024-07-15 23:50:10.308499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.578 [2024-07-15 23:50:10.308505] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.578 [2024-07-15 23:50:10.308511] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.578 [2024-07-15 23:50:10.318819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.578 qpair failed and we were unable to recover it. 
00:24:21.578 [2024-07-15 23:50:10.328520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.578 [2024-07-15 23:50:10.328561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.578 [2024-07-15 23:50:10.328576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.578 [2024-07-15 23:50:10.328583] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.578 [2024-07-15 23:50:10.328589] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.578 [2024-07-15 23:50:10.338868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.578 qpair failed and we were unable to recover it. 00:24:21.578 [2024-07-15 23:50:10.348604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.578 [2024-07-15 23:50:10.348644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.578 [2024-07-15 23:50:10.348658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.578 [2024-07-15 23:50:10.348665] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.578 [2024-07-15 23:50:10.348673] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.359007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-15 23:50:10.368640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.368683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.368697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.368704] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.368710] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.379119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 
00:24:21.579 [2024-07-15 23:50:10.388722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.388759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.388774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.388780] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.388786] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.399009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-15 23:50:10.408776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.408819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.408833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.408840] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.408846] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.419194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-15 23:50:10.428890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.428930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.428944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.428950] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.428955] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.439237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 
00:24:21.579 [2024-07-15 23:50:10.448907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.448942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.448956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.448963] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.448968] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.459384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-15 23:50:10.468936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.468974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.468987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.468994] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.468999] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.479359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-15 23:50:10.489050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.489089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.489102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.489109] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.489115] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.499303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 
00:24:21.579 [2024-07-15 23:50:10.509159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.509197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.509212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.509218] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.509224] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.519516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-15 23:50:10.529070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.529101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.529117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.529124] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.529129] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.579 [2024-07-15 23:50:10.539467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.579 qpair failed and we were unable to recover it. 00:24:21.579 [2024-07-15 23:50:10.549118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.579 [2024-07-15 23:50:10.549153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.579 [2024-07-15 23:50:10.549167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.579 [2024-07-15 23:50:10.549173] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.579 [2024-07-15 23:50:10.549179] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.837 [2024-07-15 23:50:10.559741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.837 qpair failed and we were unable to recover it. 
00:24:21.838 [2024-07-15 23:50:10.569256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.569301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.569314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.569321] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.569326] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.579772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 00:24:21.838 [2024-07-15 23:50:10.589262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.589297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.589312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.589319] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.589324] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.599710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 00:24:21.838 [2024-07-15 23:50:10.609274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.609313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.609327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.609334] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.609340] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.619764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 
00:24:21.838 [2024-07-15 23:50:10.629432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.629471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.629485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.629492] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.629498] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.639802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 00:24:21.838 [2024-07-15 23:50:10.649504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.649544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.649558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.649565] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.649571] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.659784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 00:24:21.838 [2024-07-15 23:50:10.669546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.669587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.669601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.669608] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.669614] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.679978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 
00:24:21.838 [2024-07-15 23:50:10.689594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.689636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.689650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.689656] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.689662] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.700039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 00:24:21.838 [2024-07-15 23:50:10.709670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.709710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.709728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.709735] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.709740] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.720058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 00:24:21.838 [2024-07-15 23:50:10.729730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.729764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.729779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.729785] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.729791] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.740163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 
00:24:21.838 [2024-07-15 23:50:10.749740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.749775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.749790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.749796] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.749802] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.760072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 00:24:21.838 [2024-07-15 23:50:10.769770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.769802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.769815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.769822] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.769827] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.780223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 00:24:21.838 [2024-07-15 23:50:10.789936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.789974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.789988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.789995] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.790003] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.838 [2024-07-15 23:50:10.800226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.838 qpair failed and we were unable to recover it. 
00:24:21.838 [2024-07-15 23:50:10.809957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.838 [2024-07-15 23:50:10.810004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.838 [2024-07-15 23:50:10.810021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.838 [2024-07-15 23:50:10.810028] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.838 [2024-07-15 23:50:10.810034] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.096 [2024-07-15 23:50:10.820391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.096 qpair failed and we were unable to recover it. 00:24:22.096 [2024-07-15 23:50:10.829986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.096 [2024-07-15 23:50:10.830022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.096 [2024-07-15 23:50:10.830037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.096 [2024-07-15 23:50:10.830043] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.096 [2024-07-15 23:50:10.830049] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.096 [2024-07-15 23:50:10.840316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.096 qpair failed and we were unable to recover it. 00:24:22.096 [2024-07-15 23:50:10.850037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.096 [2024-07-15 23:50:10.850074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.096 [2024-07-15 23:50:10.850088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.096 [2024-07-15 23:50:10.850095] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.096 [2024-07-15 23:50:10.850101] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.096 [2024-07-15 23:50:10.860457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.096 qpair failed and we were unable to recover it. 
00:24:22.096 [2024-07-15 23:50:10.870043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.096 [2024-07-15 23:50:10.870081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.096 [2024-07-15 23:50:10.870096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.096 [2024-07-15 23:50:10.870102] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.096 [2024-07-15 23:50:10.870108] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.096 [2024-07-15 23:50:10.880536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.096 qpair failed and we were unable to recover it. 00:24:22.096 [2024-07-15 23:50:10.890309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.096 [2024-07-15 23:50:10.890352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.096 [2024-07-15 23:50:10.890366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.096 [2024-07-15 23:50:10.890373] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.096 [2024-07-15 23:50:10.890379] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.096 [2024-07-15 23:50:10.900623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.096 qpair failed and we were unable to recover it. 00:24:22.096 [2024-07-15 23:50:10.910393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.096 [2024-07-15 23:50:10.910430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.096 [2024-07-15 23:50:10.910445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.096 [2024-07-15 23:50:10.910451] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.096 [2024-07-15 23:50:10.910457] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.096 [2024-07-15 23:50:10.920764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.096 qpair failed and we were unable to recover it. 
00:24:22.096 [2024-07-15 23:50:10.930335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.096 [2024-07-15 23:50:10.930367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.096 [2024-07-15 23:50:10.930381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.096 [2024-07-15 23:50:10.930387] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.096 [2024-07-15 23:50:10.930393] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.096 [2024-07-15 23:50:10.940577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.096 qpair failed and we were unable to recover it. 00:24:22.096 [2024-07-15 23:50:10.950450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.097 [2024-07-15 23:50:10.950490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.097 [2024-07-15 23:50:10.950504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.097 [2024-07-15 23:50:10.950510] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.097 [2024-07-15 23:50:10.950516] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.097 [2024-07-15 23:50:10.960676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.097 qpair failed and we were unable to recover it. 00:24:22.097 [2024-07-15 23:50:10.970417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.097 [2024-07-15 23:50:10.970461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.097 [2024-07-15 23:50:10.970478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.097 [2024-07-15 23:50:10.970485] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.097 [2024-07-15 23:50:10.970491] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.097 [2024-07-15 23:50:10.980801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.097 qpair failed and we were unable to recover it. 
00:24:22.097 [2024-07-15 23:50:10.990496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.097 [2024-07-15 23:50:10.990545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.097 [2024-07-15 23:50:10.990561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.097 [2024-07-15 23:50:10.990567] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.097 [2024-07-15 23:50:10.990573] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.097 [2024-07-15 23:50:11.001059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.097 qpair failed and we were unable to recover it. 00:24:22.097 [2024-07-15 23:50:11.010525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.097 [2024-07-15 23:50:11.010566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.097 [2024-07-15 23:50:11.010581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.097 [2024-07-15 23:50:11.010588] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.097 [2024-07-15 23:50:11.010594] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.097 [2024-07-15 23:50:11.020906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.097 qpair failed and we were unable to recover it. 00:24:22.097 [2024-07-15 23:50:11.030660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.097 [2024-07-15 23:50:11.030699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.097 [2024-07-15 23:50:11.030713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.097 [2024-07-15 23:50:11.030720] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.097 [2024-07-15 23:50:11.030726] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.097 [2024-07-15 23:50:11.041089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.097 qpair failed and we were unable to recover it. 
00:24:22.097 [2024-07-15 23:50:11.050774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.097 [2024-07-15 23:50:11.050809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.097 [2024-07-15 23:50:11.050824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.097 [2024-07-15 23:50:11.050830] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.097 [2024-07-15 23:50:11.050836] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.097 [2024-07-15 23:50:11.061088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.097 qpair failed and we were unable to recover it. 00:24:22.097 [2024-07-15 23:50:11.070848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.097 [2024-07-15 23:50:11.070894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.097 [2024-07-15 23:50:11.070909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.097 [2024-07-15 23:50:11.070916] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.097 [2024-07-15 23:50:11.070922] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.355 [2024-07-15 23:50:11.081257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.355 qpair failed and we were unable to recover it. 00:24:22.355 [2024-07-15 23:50:11.090872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.355 [2024-07-15 23:50:11.090913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.355 [2024-07-15 23:50:11.090927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.355 [2024-07-15 23:50:11.090934] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.355 [2024-07-15 23:50:11.090940] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.355 [2024-07-15 23:50:11.101023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.355 qpair failed and we were unable to recover it. 
00:24:22.355 [2024-07-15 23:50:11.110975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.355 [2024-07-15 23:50:11.111017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.355 [2024-07-15 23:50:11.111033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.355 [2024-07-15 23:50:11.111040] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.355 [2024-07-15 23:50:11.111045] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.355 [2024-07-15 23:50:11.121185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.355 qpair failed and we were unable to recover it. 00:24:22.355 [2024-07-15 23:50:11.131000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.355 [2024-07-15 23:50:11.131039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.355 [2024-07-15 23:50:11.131054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.355 [2024-07-15 23:50:11.131061] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.355 [2024-07-15 23:50:11.131066] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.355 [2024-07-15 23:50:11.141489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.355 qpair failed and we were unable to recover it. 00:24:22.355 [2024-07-15 23:50:11.151097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.355 [2024-07-15 23:50:11.151138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.355 [2024-07-15 23:50:11.151155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.355 [2024-07-15 23:50:11.151161] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.355 [2024-07-15 23:50:11.151167] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.355 [2024-07-15 23:50:11.161230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.355 qpair failed and we were unable to recover it. 
00:24:22.355 [2024-07-15 23:50:11.171125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.355 [2024-07-15 23:50:11.171164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.355 [2024-07-15 23:50:11.171178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.355 [2024-07-15 23:50:11.171185] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.355 [2024-07-15 23:50:11.171190] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.355 [2024-07-15 23:50:11.181453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.355 qpair failed and we were unable to recover it. 00:24:22.355 [2024-07-15 23:50:11.191200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.355 [2024-07-15 23:50:11.191239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.355 [2024-07-15 23:50:11.191254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.355 [2024-07-15 23:50:11.191260] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.355 [2024-07-15 23:50:11.191266] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.355 [2024-07-15 23:50:11.201584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.355 qpair failed and we were unable to recover it. 00:24:22.355 [2024-07-15 23:50:11.211279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.355 [2024-07-15 23:50:11.211319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.355 [2024-07-15 23:50:11.211333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.355 [2024-07-15 23:50:11.211340] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.355 [2024-07-15 23:50:11.211346] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.355 [2024-07-15 23:50:11.221699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.355 qpair failed and we were unable to recover it. 
00:24:22.355 [2024-07-15 23:50:11.231303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.355 [2024-07-15 23:50:11.231340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.355 [2024-07-15 23:50:11.231353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.356 [2024-07-15 23:50:11.231360] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.356 [2024-07-15 23:50:11.231369] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.356 [2024-07-15 23:50:11.241598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.356 qpair failed and we were unable to recover it. 00:24:22.356 [2024-07-15 23:50:11.251311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.356 [2024-07-15 23:50:11.251350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.356 [2024-07-15 23:50:11.251364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.356 [2024-07-15 23:50:11.251371] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.356 [2024-07-15 23:50:11.251377] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.356 [2024-07-15 23:50:11.261776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.356 qpair failed and we were unable to recover it. 00:24:22.356 [2024-07-15 23:50:11.271369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.356 [2024-07-15 23:50:11.271410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.356 [2024-07-15 23:50:11.271424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.356 [2024-07-15 23:50:11.271431] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.356 [2024-07-15 23:50:11.271437] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.356 [2024-07-15 23:50:11.281794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.356 qpair failed and we were unable to recover it. 
00:24:22.356 [2024-07-15 23:50:11.291329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.356 [2024-07-15 23:50:11.291368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.356 [2024-07-15 23:50:11.291382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.356 [2024-07-15 23:50:11.291388] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.356 [2024-07-15 23:50:11.291394] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.356 [2024-07-15 23:50:11.301663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.356 qpair failed and we were unable to recover it. 00:24:22.356 [2024-07-15 23:50:11.311370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.356 [2024-07-15 23:50:11.311411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.356 [2024-07-15 23:50:11.311426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.356 [2024-07-15 23:50:11.311432] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.356 [2024-07-15 23:50:11.311438] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.356 [2024-07-15 23:50:11.322000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.356 qpair failed and we were unable to recover it. 00:24:22.356 [2024-07-15 23:50:11.331560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.356 [2024-07-15 23:50:11.331593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.356 [2024-07-15 23:50:11.331607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.356 [2024-07-15 23:50:11.331614] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.356 [2024-07-15 23:50:11.331619] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.614 [2024-07-15 23:50:11.342022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.614 qpair failed and we were unable to recover it. 
00:24:22.614 [2024-07-15 23:50:11.351639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.614 [2024-07-15 23:50:11.351680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.614 [2024-07-15 23:50:11.351693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.614 [2024-07-15 23:50:11.351700] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.614 [2024-07-15 23:50:11.351706] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.614 [2024-07-15 23:50:11.362003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.614 qpair failed and we were unable to recover it. 00:24:22.614 [2024-07-15 23:50:11.371623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.614 [2024-07-15 23:50:11.371662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.614 [2024-07-15 23:50:11.371677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.614 [2024-07-15 23:50:11.371684] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.614 [2024-07-15 23:50:11.371689] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.614 [2024-07-15 23:50:11.381966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.614 qpair failed and we were unable to recover it. 00:24:22.614 [2024-07-15 23:50:11.391698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.614 [2024-07-15 23:50:11.391735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.614 [2024-07-15 23:50:11.391749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.614 [2024-07-15 23:50:11.391756] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.614 [2024-07-15 23:50:11.391761] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.614 [2024-07-15 23:50:11.402145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.614 qpair failed and we were unable to recover it. 
00:24:22.614 [2024-07-15 23:50:11.411740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.614 [2024-07-15 23:50:11.411782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.614 [2024-07-15 23:50:11.411799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.614 [2024-07-15 23:50:11.411806] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.614 [2024-07-15 23:50:11.411811] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.614 [2024-07-15 23:50:11.422184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.614 qpair failed and we were unable to recover it. 00:24:22.615 [2024-07-15 23:50:11.431846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.431885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.431899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.431906] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.431912] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.615 [2024-07-15 23:50:11.442273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.615 qpair failed and we were unable to recover it. 00:24:22.615 [2024-07-15 23:50:11.451854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.451891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.451906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.451912] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.451918] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.615 [2024-07-15 23:50:11.462275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.615 qpair failed and we were unable to recover it. 
00:24:22.615 [2024-07-15 23:50:11.472020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.472059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.472073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.472080] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.472086] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.615 [2024-07-15 23:50:11.482371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.615 qpair failed and we were unable to recover it. 00:24:22.615 [2024-07-15 23:50:11.492094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.492134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.492148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.492155] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.492160] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.615 [2024-07-15 23:50:11.502381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.615 qpair failed and we were unable to recover it. 00:24:22.615 [2024-07-15 23:50:11.512098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.512137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.512152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.512158] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.512164] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.615 [2024-07-15 23:50:11.522577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.615 qpair failed and we were unable to recover it. 
00:24:22.615 [2024-07-15 23:50:11.532102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.532137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.532151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.532158] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.532164] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.615 [2024-07-15 23:50:11.542611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.615 qpair failed and we were unable to recover it. 00:24:22.615 [2024-07-15 23:50:11.552196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.552237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.552251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.552257] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.552262] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.615 [2024-07-15 23:50:11.562648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.615 qpair failed and we were unable to recover it. 00:24:22.615 [2024-07-15 23:50:11.572220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.572260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.572274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.572281] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.572286] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.615 [2024-07-15 23:50:11.582733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.615 qpair failed and we were unable to recover it. 
00:24:22.615 [2024-07-15 23:50:11.592258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.615 [2024-07-15 23:50:11.592297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.615 [2024-07-15 23:50:11.592314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.615 [2024-07-15 23:50:11.592320] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.615 [2024-07-15 23:50:11.592326] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.874 [2024-07-15 23:50:11.602655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.874 qpair failed and we were unable to recover it. 00:24:22.874 [2024-07-15 23:50:11.612270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.874 [2024-07-15 23:50:11.612308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.874 [2024-07-15 23:50:11.612322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.874 [2024-07-15 23:50:11.612329] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.874 [2024-07-15 23:50:11.612335] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.874 [2024-07-15 23:50:11.622768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.874 qpair failed and we were unable to recover it. 00:24:22.874 [2024-07-15 23:50:11.632445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.874 [2024-07-15 23:50:11.632486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.874 [2024-07-15 23:50:11.632500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.874 [2024-07-15 23:50:11.632506] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.874 [2024-07-15 23:50:11.632512] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.874 [2024-07-15 23:50:11.642749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.874 qpair failed and we were unable to recover it. 
00:24:22.874 [2024-07-15 23:50:11.652399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.874 [2024-07-15 23:50:11.652435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.874 [2024-07-15 23:50:11.652448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.874 [2024-07-15 23:50:11.652455] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.874 [2024-07-15 23:50:11.652461] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.874 [2024-07-15 23:50:11.662844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.874 qpair failed and we were unable to recover it. 00:24:22.874 [2024-07-15 23:50:11.672488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.874 [2024-07-15 23:50:11.672524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.874 [2024-07-15 23:50:11.672537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.874 [2024-07-15 23:50:11.672555] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.874 [2024-07-15 23:50:11.672564] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.874 [2024-07-15 23:50:11.682973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.874 qpair failed and we were unable to recover it. 00:24:22.874 [2024-07-15 23:50:11.692532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.874 [2024-07-15 23:50:11.692579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.874 [2024-07-15 23:50:11.692594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.874 [2024-07-15 23:50:11.692600] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.874 [2024-07-15 23:50:11.692606] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.874 [2024-07-15 23:50:11.703030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.874 qpair failed and we were unable to recover it. 
00:24:22.874 [2024-07-15 23:50:11.712715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.874 [2024-07-15 23:50:11.712750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.874 [2024-07-15 23:50:11.712766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.874 [2024-07-15 23:50:11.712773] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.874 [2024-07-15 23:50:11.712779] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.874 [2024-07-15 23:50:11.723290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.874 qpair failed and we were unable to recover it. 00:24:22.874 [2024-07-15 23:50:11.732691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.874 [2024-07-15 23:50:11.732728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.874 [2024-07-15 23:50:11.732743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.874 [2024-07-15 23:50:11.732750] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.874 [2024-07-15 23:50:11.732756] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.874 [2024-07-15 23:50:11.743191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.874 qpair failed and we were unable to recover it. 00:24:22.874 [2024-07-15 23:50:11.752802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.874 [2024-07-15 23:50:11.752840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.874 [2024-07-15 23:50:11.752854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.874 [2024-07-15 23:50:11.752860] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.874 [2024-07-15 23:50:11.752866] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.875 [2024-07-15 23:50:11.763212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.875 qpair failed and we were unable to recover it. 
00:24:22.875 [2024-07-15 23:50:11.772839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.875 [2024-07-15 23:50:11.772882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.875 [2024-07-15 23:50:11.772896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.875 [2024-07-15 23:50:11.772902] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.875 [2024-07-15 23:50:11.772908] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.875 [2024-07-15 23:50:11.783199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.875 qpair failed and we were unable to recover it. 00:24:22.875 [2024-07-15 23:50:11.792953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.875 [2024-07-15 23:50:11.792993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.875 [2024-07-15 23:50:11.793007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.875 [2024-07-15 23:50:11.793014] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.875 [2024-07-15 23:50:11.793019] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.875 [2024-07-15 23:50:11.803415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.875 qpair failed and we were unable to recover it. 00:24:22.875 [2024-07-15 23:50:11.812987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.875 [2024-07-15 23:50:11.813027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.875 [2024-07-15 23:50:11.813043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.875 [2024-07-15 23:50:11.813050] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.875 [2024-07-15 23:50:11.813056] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.875 [2024-07-15 23:50:11.823328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.875 qpair failed and we were unable to recover it. 
00:24:22.875 [2024-07-15 23:50:11.833114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.875 [2024-07-15 23:50:11.833154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.875 [2024-07-15 23:50:11.833168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.875 [2024-07-15 23:50:11.833174] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.875 [2024-07-15 23:50:11.833180] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:22.875 [2024-07-15 23:50:11.843481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:22.875 qpair failed and we were unable to recover it. 00:24:22.875 [2024-07-15 23:50:11.853127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:22.875 [2024-07-15 23:50:11.853173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:22.875 [2024-07-15 23:50:11.853190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:22.875 [2024-07-15 23:50:11.853197] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:22.875 [2024-07-15 23:50:11.853202] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.133 [2024-07-15 23:50:11.863593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.133 qpair failed and we were unable to recover it. 00:24:23.133 [2024-07-15 23:50:11.873131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.133 [2024-07-15 23:50:11.873170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.133 [2024-07-15 23:50:11.873184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.133 [2024-07-15 23:50:11.873191] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.133 [2024-07-15 23:50:11.873196] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.133 [2024-07-15 23:50:11.883549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.133 qpair failed and we were unable to recover it. 
00:24:23.133 [2024-07-15 23:50:11.893180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.133 [2024-07-15 23:50:11.893218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.133 [2024-07-15 23:50:11.893232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.133 [2024-07-15 23:50:11.893239] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.133 [2024-07-15 23:50:11.893245] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.133 [2024-07-15 23:50:11.903549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.133 qpair failed and we were unable to recover it. 00:24:23.133 [2024-07-15 23:50:11.913227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.133 [2024-07-15 23:50:11.913265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.133 [2024-07-15 23:50:11.913280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.133 [2024-07-15 23:50:11.913286] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.133 [2024-07-15 23:50:11.913292] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.133 [2024-07-15 23:50:11.923417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.133 qpair failed and we were unable to recover it. 00:24:23.133 [2024-07-15 23:50:11.933251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.133 [2024-07-15 23:50:11.933289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.133 [2024-07-15 23:50:11.933303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.133 [2024-07-15 23:50:11.933309] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.133 [2024-07-15 23:50:11.933315] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.133 [2024-07-15 23:50:11.943676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.133 qpair failed and we were unable to recover it. 
00:24:23.133 [2024-07-15 23:50:11.953213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.133 [2024-07-15 23:50:11.953248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.133 [2024-07-15 23:50:11.953262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.133 [2024-07-15 23:50:11.953269] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.133 [2024-07-15 23:50:11.953275] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.133 [2024-07-15 23:50:11.963555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.133 qpair failed and we were unable to recover it. 00:24:23.133 [2024-07-15 23:50:11.973342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.133 [2024-07-15 23:50:11.973379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.133 [2024-07-15 23:50:11.973393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.133 [2024-07-15 23:50:11.973399] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.134 [2024-07-15 23:50:11.973405] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.134 [2024-07-15 23:50:11.983649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.134 qpair failed and we were unable to recover it. 00:24:23.134 [2024-07-15 23:50:11.993441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.134 [2024-07-15 23:50:11.993478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.134 [2024-07-15 23:50:11.993493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.134 [2024-07-15 23:50:11.993499] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.134 [2024-07-15 23:50:11.993505] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.134 [2024-07-15 23:50:12.003865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.134 qpair failed and we were unable to recover it. 
00:24:23.134 [2024-07-15 23:50:12.013408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.134 [2024-07-15 23:50:12.013451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.134 [2024-07-15 23:50:12.013466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.134 [2024-07-15 23:50:12.013472] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.134 [2024-07-15 23:50:12.013478] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.134 [2024-07-15 23:50:12.024006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.134 qpair failed and we were unable to recover it. 00:24:23.134 [2024-07-15 23:50:12.033513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.134 [2024-07-15 23:50:12.033556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.134 [2024-07-15 23:50:12.033573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.134 [2024-07-15 23:50:12.033580] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.134 [2024-07-15 23:50:12.033585] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.134 [2024-07-15 23:50:12.043945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.134 qpair failed and we were unable to recover it. 00:24:23.134 [2024-07-15 23:50:12.053554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.134 [2024-07-15 23:50:12.053588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.134 [2024-07-15 23:50:12.053603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.134 [2024-07-15 23:50:12.053609] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.134 [2024-07-15 23:50:12.053615] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.134 [2024-07-15 23:50:12.064053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.134 qpair failed and we were unable to recover it. 
00:24:23.134 [2024-07-15 23:50:12.073662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.134 [2024-07-15 23:50:12.073704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.134 [2024-07-15 23:50:12.073718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.134 [2024-07-15 23:50:12.073725] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.134 [2024-07-15 23:50:12.073731] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.134 [2024-07-15 23:50:12.083945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.134 qpair failed and we were unable to recover it. 00:24:23.134 [2024-07-15 23:50:12.093728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.134 [2024-07-15 23:50:12.093766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.134 [2024-07-15 23:50:12.093780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.134 [2024-07-15 23:50:12.093786] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.134 [2024-07-15 23:50:12.093792] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.134 [2024-07-15 23:50:12.104190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.134 qpair failed and we were unable to recover it. 00:24:23.134 [2024-07-15 23:50:12.113776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.134 [2024-07-15 23:50:12.113817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.134 [2024-07-15 23:50:12.113832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.134 [2024-07-15 23:50:12.113839] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.134 [2024-07-15 23:50:12.113847] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.392 [2024-07-15 23:50:12.124087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.392 qpair failed and we were unable to recover it. 
00:24:23.392 [2024-07-15 23:50:12.133857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.392 [2024-07-15 23:50:12.133895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.133909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.133916] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.133921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.144356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 00:24:23.393 [2024-07-15 23:50:12.153881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.153921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.153936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.153943] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.153948] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.164308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 00:24:23.393 [2024-07-15 23:50:12.173971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.174007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.174021] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.174028] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.174034] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.184389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 
00:24:23.393 [2024-07-15 23:50:12.193974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.194009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.194023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.194030] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.194035] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.204365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 00:24:23.393 [2024-07-15 23:50:12.214015] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.214053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.214067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.214074] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.214079] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.224501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 00:24:23.393 [2024-07-15 23:50:12.234156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.234195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.234209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.234215] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.234221] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.244545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 
00:24:23.393 [2024-07-15 23:50:12.254196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.254235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.254249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.254256] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.254261] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.264646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 00:24:23.393 [2024-07-15 23:50:12.274250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.274290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.274303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.274310] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.274316] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.284740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 00:24:23.393 [2024-07-15 23:50:12.294310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.294347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.294363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.294370] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.294375] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.304660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 
00:24:23.393 [2024-07-15 23:50:12.314406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.314448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.314462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.314469] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.314474] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.324777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 00:24:23.393 [2024-07-15 23:50:12.334364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.334407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.334421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.334427] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.334433] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.344938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 00:24:23.393 [2024-07-15 23:50:12.354648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.393 [2024-07-15 23:50:12.354686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.393 [2024-07-15 23:50:12.354700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.393 [2024-07-15 23:50:12.354707] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.393 [2024-07-15 23:50:12.354712] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.393 [2024-07-15 23:50:12.364990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.393 qpair failed and we were unable to recover it. 
00:24:23.651 [2024-07-15 23:50:12.374533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.651 [2024-07-15 23:50:12.374571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.651 [2024-07-15 23:50:12.374586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.651 [2024-07-15 23:50:12.374593] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.651 [2024-07-15 23:50:12.374598] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.651 [2024-07-15 23:50:12.384938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.651 qpair failed and we were unable to recover it. 00:24:23.652 [2024-07-15 23:50:12.394761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.394802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.394817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.394824] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.394829] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.405105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 00:24:23.652 [2024-07-15 23:50:12.414686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.414731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.414745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.414752] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.414758] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.425003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 
00:24:23.652 [2024-07-15 23:50:12.434751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.434788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.434802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.434809] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.434814] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.445162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 00:24:23.652 [2024-07-15 23:50:12.454792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.454828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.454843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.454849] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.454855] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.465179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 00:24:23.652 [2024-07-15 23:50:12.474930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.474969] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.474985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.474992] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.474997] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.485438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 
00:24:23.652 [2024-07-15 23:50:12.494914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.494954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.494968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.494975] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.494981] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.505233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 00:24:23.652 [2024-07-15 23:50:12.514916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.514957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.514972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.514978] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.514984] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.525493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 00:24:23.652 [2024-07-15 23:50:12.535000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.535039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.535053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.535060] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.535066] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.545470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 
00:24:23.652 [2024-07-15 23:50:12.555142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.555181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.555195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.555202] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.555211] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.565586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 00:24:23.652 [2024-07-15 23:50:12.575132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.575175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.575189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.575196] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.575202] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.585503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 00:24:23.652 [2024-07-15 23:50:12.595221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.595261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.595276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.595282] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.595288] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.605688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 
00:24:23.652 [2024-07-15 23:50:12.615354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.652 [2024-07-15 23:50:12.615390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.652 [2024-07-15 23:50:12.615404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.652 [2024-07-15 23:50:12.615411] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.652 [2024-07-15 23:50:12.615416] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.652 [2024-07-15 23:50:12.625851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.652 qpair failed and we were unable to recover it. 00:24:23.911 [2024-07-15 23:50:12.635489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.911 [2024-07-15 23:50:12.635528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.911 [2024-07-15 23:50:12.635549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.911 [2024-07-15 23:50:12.635555] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.911 [2024-07-15 23:50:12.635561] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.911 [2024-07-15 23:50:12.645689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.911 qpair failed and we were unable to recover it. 00:24:23.911 [2024-07-15 23:50:12.655482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.911 [2024-07-15 23:50:12.655525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.911 [2024-07-15 23:50:12.655545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.911 [2024-07-15 23:50:12.655553] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.911 [2024-07-15 23:50:12.655558] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.911 [2024-07-15 23:50:12.665689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.911 qpair failed and we were unable to recover it. 
00:24:23.911 [2024-07-15 23:50:12.675465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.911 [2024-07-15 23:50:12.675505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.911 [2024-07-15 23:50:12.675519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.911 [2024-07-15 23:50:12.675526] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.911 [2024-07-15 23:50:12.675531] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.911 [2024-07-15 23:50:12.685895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.911 qpair failed and we were unable to recover it. 00:24:23.911 [2024-07-15 23:50:12.695498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.911 [2024-07-15 23:50:12.695534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.911 [2024-07-15 23:50:12.695560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.911 [2024-07-15 23:50:12.695567] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.695573] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.705962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 00:24:23.912 [2024-07-15 23:50:12.715632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.715670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.715687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.715694] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.715699] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.725860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 
00:24:23.912 [2024-07-15 23:50:12.735789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.735826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.735843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.735849] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.735855] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.746059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 00:24:23.912 [2024-07-15 23:50:12.755858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.755894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.755908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.755915] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.755921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.765843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 00:24:23.912 [2024-07-15 23:50:12.775754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.775786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.775799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.775806] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.775811] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.786093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 
00:24:23.912 [2024-07-15 23:50:12.795899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.795940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.795954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.795960] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.795966] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.806117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 00:24:23.912 [2024-07-15 23:50:12.815925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.815967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.815981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.815988] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.815994] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.826373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 00:24:23.912 [2024-07-15 23:50:12.836192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.836230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.836244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.836251] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.836256] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.846151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 
00:24:23.912 [2024-07-15 23:50:12.856029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.856066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.856080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.856087] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.856093] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.866403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 00:24:23.912 [2024-07-15 23:50:12.876209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.912 [2024-07-15 23:50:12.876247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.912 [2024-07-15 23:50:12.876262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.912 [2024-07-15 23:50:12.876268] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.912 [2024-07-15 23:50:12.876274] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.912 [2024-07-15 23:50:12.886630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.912 qpair failed and we were unable to recover it. 00:24:24.171 [2024-07-15 23:50:12.896168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:12.896213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:12.896227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:12.896234] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:12.896240] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:12.906514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 
00:24:24.171 [2024-07-15 23:50:12.916263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:12.916306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:12.916325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:12.916331] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:12.916337] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:12.926577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 00:24:24.171 [2024-07-15 23:50:12.936419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:12.936457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:12.936471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:12.936477] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:12.936483] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:12.946618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 00:24:24.171 [2024-07-15 23:50:12.956377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:12.956417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:12.956431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:12.956437] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:12.956443] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:12.966770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 
00:24:24.171 [2024-07-15 23:50:12.976478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:12.976522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:12.976543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:12.976550] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:12.976556] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:12.986963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 00:24:24.171 [2024-07-15 23:50:12.996553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:12.996591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:12.996605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:12.996612] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:12.996620] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:13.006980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 00:24:24.171 [2024-07-15 23:50:13.016553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:13.016586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:13.016601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:13.016608] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:13.016613] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:13.027055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 
00:24:24.171 [2024-07-15 23:50:13.036662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:13.036701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:13.036716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:13.036722] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:13.036728] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:13.047100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 00:24:24.171 [2024-07-15 23:50:13.056823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:13.056859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:13.056874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:13.056881] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:13.056887] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:13.067097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 00:24:24.171 [2024-07-15 23:50:13.076769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.171 [2024-07-15 23:50:13.076807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.171 [2024-07-15 23:50:13.076821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.171 [2024-07-15 23:50:13.076827] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.171 [2024-07-15 23:50:13.076833] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.171 [2024-07-15 23:50:13.087291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.171 qpair failed and we were unable to recover it. 
00:24:24.171 [2024-07-15 23:50:13.096805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.172 [2024-07-15 23:50:13.096844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.172 [2024-07-15 23:50:13.096858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.172 [2024-07-15 23:50:13.096864] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.172 [2024-07-15 23:50:13.096870] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.172 [2024-07-15 23:50:13.107271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.172 qpair failed and we were unable to recover it. 00:24:24.172 [2024-07-15 23:50:13.116934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.172 [2024-07-15 23:50:13.116973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.172 [2024-07-15 23:50:13.116988] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.172 [2024-07-15 23:50:13.116994] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.172 [2024-07-15 23:50:13.117000] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.172 [2024-07-15 23:50:13.127361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.172 qpair failed and we were unable to recover it. 00:24:24.172 [2024-07-15 23:50:13.137021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.172 [2024-07-15 23:50:13.137061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.172 [2024-07-15 23:50:13.137075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.172 [2024-07-15 23:50:13.137081] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.172 [2024-07-15 23:50:13.137086] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.172 [2024-07-15 23:50:13.147504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.172 qpair failed and we were unable to recover it. 
00:24:24.430 [2024-07-15 23:50:13.157026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.430 [2024-07-15 23:50:13.157059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.430 [2024-07-15 23:50:13.157073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.430 [2024-07-15 23:50:13.157079] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.157085] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.167447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 00:24:24.431 [2024-07-15 23:50:13.177005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.177044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.177061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.177068] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.177073] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.187560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 00:24:24.431 [2024-07-15 23:50:13.197183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.197223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.197237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.197243] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.197249] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.207507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 
00:24:24.431 [2024-07-15 23:50:13.217195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.217237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.217251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.217258] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.217263] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.227588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 00:24:24.431 [2024-07-15 23:50:13.237188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.237228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.237242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.237249] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.237255] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.247754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 00:24:24.431 [2024-07-15 23:50:13.257245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.257285] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.257299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.257306] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.257312] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.267647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 
00:24:24.431 [2024-07-15 23:50:13.277354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.277391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.277405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.277411] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.277417] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.287860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 00:24:24.431 [2024-07-15 23:50:13.297436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.297476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.297490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.297496] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.297502] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.307944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 00:24:24.431 [2024-07-15 23:50:13.317401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.317439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.317454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.317460] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.317466] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.328050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 
00:24:24.431 [2024-07-15 23:50:13.337490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.337527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.337545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.337552] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.337558] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.347887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 00:24:24.431 [2024-07-15 23:50:13.357596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.357633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.357650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.357656] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.357661] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.368111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 00:24:24.431 [2024-07-15 23:50:13.377615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.377651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.377665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.377672] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.431 [2024-07-15 23:50:13.377677] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.431 [2024-07-15 23:50:13.388073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.431 qpair failed and we were unable to recover it. 
00:24:24.431 [2024-07-15 23:50:13.397777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.431 [2024-07-15 23:50:13.397811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.431 [2024-07-15 23:50:13.397825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.431 [2024-07-15 23:50:13.397832] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.432 [2024-07-15 23:50:13.397837] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.432 [2024-07-15 23:50:13.408135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.432 qpair failed and we were unable to recover it. 00:24:24.690 [2024-07-15 23:50:13.417727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.417760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.417775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.417781] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.417787] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.428284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 00:24:24.690 [2024-07-15 23:50:13.438003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.438039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.438053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.438059] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.438067] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.448238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 
00:24:24.690 [2024-07-15 23:50:13.457921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.457961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.457974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.457981] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.457987] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.468401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 00:24:24.690 [2024-07-15 23:50:13.477961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.477996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.478011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.478017] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.478023] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.488314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 00:24:24.690 [2024-07-15 23:50:13.498041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.498079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.498092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.498099] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.498105] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.508482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 
00:24:24.690 [2024-07-15 23:50:13.518080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.518118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.518132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.518138] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.518144] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.528544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 00:24:24.690 [2024-07-15 23:50:13.538135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.538174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.538188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.538194] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.538200] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.548422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 00:24:24.690 [2024-07-15 23:50:13.558178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.558214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.558228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.558234] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.558240] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.568597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 
00:24:24.690 [2024-07-15 23:50:13.578237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.578267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.578280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.690 [2024-07-15 23:50:13.578287] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.690 [2024-07-15 23:50:13.578293] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.690 [2024-07-15 23:50:13.588749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.690 qpair failed and we were unable to recover it. 00:24:24.690 [2024-07-15 23:50:13.598257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.690 [2024-07-15 23:50:13.598295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.690 [2024-07-15 23:50:13.598309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.691 [2024-07-15 23:50:13.598315] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.691 [2024-07-15 23:50:13.598321] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.691 [2024-07-15 23:50:13.608758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.691 qpair failed and we were unable to recover it. 00:24:24.691 [2024-07-15 23:50:13.618405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.691 [2024-07-15 23:50:13.618451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.691 [2024-07-15 23:50:13.618468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.691 [2024-07-15 23:50:13.618475] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.691 [2024-07-15 23:50:13.618480] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.691 [2024-07-15 23:50:13.628811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.691 qpair failed and we were unable to recover it. 
00:24:24.691 [2024-07-15 23:50:13.638302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.691 [2024-07-15 23:50:13.638344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.691 [2024-07-15 23:50:13.638357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.691 [2024-07-15 23:50:13.638364] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.691 [2024-07-15 23:50:13.638369] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.691 [2024-07-15 23:50:13.648719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.691 qpair failed and we were unable to recover it. 00:24:24.691 [2024-07-15 23:50:13.658447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.691 [2024-07-15 23:50:13.658483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.691 [2024-07-15 23:50:13.658496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.691 [2024-07-15 23:50:13.658503] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.691 [2024-07-15 23:50:13.658508] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.691 [2024-07-15 23:50:13.668975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.691 qpair failed and we were unable to recover it. 00:24:24.949 [2024-07-15 23:50:13.678668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.949 [2024-07-15 23:50:13.678705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.949 [2024-07-15 23:50:13.678719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.949 [2024-07-15 23:50:13.678725] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.949 [2024-07-15 23:50:13.678731] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.949 [2024-07-15 23:50:13.688934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.949 qpair failed and we were unable to recover it. 
00:24:24.949 [2024-07-15 23:50:13.698743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.949 [2024-07-15 23:50:13.698781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.949 [2024-07-15 23:50:13.698794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.949 [2024-07-15 23:50:13.698800] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.949 [2024-07-15 23:50:13.698806] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.949 [2024-07-15 23:50:13.709133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.949 qpair failed and we were unable to recover it. 00:24:24.949 [2024-07-15 23:50:13.718580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.949 [2024-07-15 23:50:13.718613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.949 [2024-07-15 23:50:13.718629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.949 [2024-07-15 23:50:13.718636] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.949 [2024-07-15 23:50:13.718642] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.949 [2024-07-15 23:50:13.729100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.949 qpair failed and we were unable to recover it. 00:24:24.949 [2024-07-15 23:50:13.738676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.949 [2024-07-15 23:50:13.738713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.949 [2024-07-15 23:50:13.738727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.949 [2024-07-15 23:50:13.738734] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.949 [2024-07-15 23:50:13.738739] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.949 [2024-07-15 23:50:13.749195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.949 qpair failed and we were unable to recover it. 
00:24:24.949 [2024-07-15 23:50:13.758734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.949 [2024-07-15 23:50:13.758772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.949 [2024-07-15 23:50:13.758785] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.949 [2024-07-15 23:50:13.758791] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.949 [2024-07-15 23:50:13.758797] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.949 [2024-07-15 23:50:13.769130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.949 qpair failed and we were unable to recover it. 00:24:24.949 [2024-07-15 23:50:13.778865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.949 [2024-07-15 23:50:13.778905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.949 [2024-07-15 23:50:13.778919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.949 [2024-07-15 23:50:13.778925] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.949 [2024-07-15 23:50:13.778931] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.949 [2024-07-15 23:50:13.789249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.949 qpair failed and we were unable to recover it. 00:24:24.949 [2024-07-15 23:50:13.798909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.949 [2024-07-15 23:50:13.798945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.949 [2024-07-15 23:50:13.798963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.950 [2024-07-15 23:50:13.798970] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.950 [2024-07-15 23:50:13.798975] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.950 [2024-07-15 23:50:13.809304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.950 qpair failed and we were unable to recover it. 
00:24:24.950 [2024-07-15 23:50:13.818866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.950 [2024-07-15 23:50:13.818908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.950 [2024-07-15 23:50:13.818922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.950 [2024-07-15 23:50:13.818928] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.950 [2024-07-15 23:50:13.818934] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.950 [2024-07-15 23:50:13.829333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.950 qpair failed and we were unable to recover it. 00:24:24.950 [2024-07-15 23:50:13.839083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.950 [2024-07-15 23:50:13.839121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.950 [2024-07-15 23:50:13.839136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.950 [2024-07-15 23:50:13.839142] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.950 [2024-07-15 23:50:13.839148] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.950 [2024-07-15 23:50:13.849457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.950 qpair failed and we were unable to recover it. 00:24:24.950 [2024-07-15 23:50:13.859056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.950 [2024-07-15 23:50:13.859091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.950 [2024-07-15 23:50:13.859105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.950 [2024-07-15 23:50:13.859112] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.950 [2024-07-15 23:50:13.859117] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.950 [2024-07-15 23:50:13.869504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.950 qpair failed and we were unable to recover it. 
00:24:24.950 [2024-07-15 23:50:13.879147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.950 [2024-07-15 23:50:13.879179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.950 [2024-07-15 23:50:13.879193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.950 [2024-07-15 23:50:13.879199] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.950 [2024-07-15 23:50:13.879208] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.950 [2024-07-15 23:50:13.889551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.950 qpair failed and we were unable to recover it. 00:24:24.950 [2024-07-15 23:50:13.899214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.950 [2024-07-15 23:50:13.899255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.950 [2024-07-15 23:50:13.899270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.950 [2024-07-15 23:50:13.899277] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.950 [2024-07-15 23:50:13.899282] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.950 [2024-07-15 23:50:13.909449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.950 qpair failed and we were unable to recover it. 00:24:24.950 [2024-07-15 23:50:13.919281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:24.950 [2024-07-15 23:50:13.919319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:24.950 [2024-07-15 23:50:13.919333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:24.950 [2024-07-15 23:50:13.919339] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:24.950 [2024-07-15 23:50:13.919345] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:24.950 [2024-07-15 23:50:13.929580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:24.950 qpair failed and we were unable to recover it. 
00:24:25.207 [2024-07-15 23:50:13.939231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.207 [2024-07-15 23:50:13.939271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.207 [2024-07-15 23:50:13.939285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.207 [2024-07-15 23:50:13.939292] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.207 [2024-07-15 23:50:13.939297] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.207 [2024-07-15 23:50:13.949684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.207 qpair failed and we were unable to recover it. 00:24:25.207 [2024-07-15 23:50:13.959260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.207 [2024-07-15 23:50:13.959293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.207 [2024-07-15 23:50:13.959307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.207 [2024-07-15 23:50:13.959314] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.207 [2024-07-15 23:50:13.959320] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.207 [2024-07-15 23:50:13.969565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.207 qpair failed and we were unable to recover it. 00:24:25.207 [2024-07-15 23:50:13.979274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.207 [2024-07-15 23:50:13.979307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.207 [2024-07-15 23:50:13.979321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.207 [2024-07-15 23:50:13.979327] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.207 [2024-07-15 23:50:13.979333] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.207 [2024-07-15 23:50:13.989720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.207 qpair failed and we were unable to recover it. 
00:24:25.207 [2024-07-15 23:50:13.999408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.207 [2024-07-15 23:50:13.999448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.207 [2024-07-15 23:50:13.999462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.207 [2024-07-15 23:50:13.999469] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:13.999474] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.009717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 00:24:25.208 [2024-07-15 23:50:14.019393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.019429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.019444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.019450] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.019456] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.029928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 00:24:25.208 [2024-07-15 23:50:14.039559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.039598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.039612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.039619] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.039625] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.049983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 
00:24:25.208 [2024-07-15 23:50:14.059604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.059644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.059661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.059668] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.059673] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.069893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 00:24:25.208 [2024-07-15 23:50:14.079573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.079609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.079623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.079630] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.079636] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.089881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 00:24:25.208 [2024-07-15 23:50:14.099731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.099777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.099792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.099798] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.099804] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.109960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 
00:24:25.208 [2024-07-15 23:50:14.119757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.119793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.119808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.119814] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.119820] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.130219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 00:24:25.208 [2024-07-15 23:50:14.139819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.139858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.139872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.139879] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.139885] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.150148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 00:24:25.208 [2024-07-15 23:50:14.159847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.159886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.159900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.159907] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.159912] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.208 [2024-07-15 23:50:14.170050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.208 qpair failed and we were unable to recover it. 
00:24:25.208 [2024-07-15 23:50:14.179940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.208 [2024-07-15 23:50:14.179983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.208 [2024-07-15 23:50:14.179997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.208 [2024-07-15 23:50:14.180003] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.208 [2024-07-15 23:50:14.180009] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.190295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 00:24:25.466 [2024-07-15 23:50:14.199976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.200016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.200029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.200036] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.200042] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.210382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 00:24:25.466 [2024-07-15 23:50:14.220035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.220071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.220085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.220091] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.220097] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.230483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 
00:24:25.466 [2024-07-15 23:50:14.240146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.240181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.240198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.240204] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.240210] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.250498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 00:24:25.466 [2024-07-15 23:50:14.260035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.260072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.260086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.260093] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.260099] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.270667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 00:24:25.466 [2024-07-15 23:50:14.280391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.280430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.280444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.280451] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.280456] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.290662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 
00:24:25.466 [2024-07-15 23:50:14.300269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.300302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.300317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.300323] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.300328] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.310704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 00:24:25.466 [2024-07-15 23:50:14.320370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.320405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.320420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.320428] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.320436] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.330728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 00:24:25.466 [2024-07-15 23:50:14.340441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.340480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.340494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.340501] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.340506] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.350836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 
00:24:25.466 [2024-07-15 23:50:14.360577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.360610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.360625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.360631] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.360637] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.370915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 00:24:25.466 [2024-07-15 23:50:14.380644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.380681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.380695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.380702] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.380707] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.466 [2024-07-15 23:50:14.390860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.466 qpair failed and we were unable to recover it. 00:24:25.466 [2024-07-15 23:50:14.400695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.466 [2024-07-15 23:50:14.400735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.466 [2024-07-15 23:50:14.400749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.466 [2024-07-15 23:50:14.400756] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.466 [2024-07-15 23:50:14.400761] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.467 [2024-07-15 23:50:14.411068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.467 qpair failed and we were unable to recover it. 
00:24:25.467 [2024-07-15 23:50:14.420749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.467 [2024-07-15 23:50:14.420794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.467 [2024-07-15 23:50:14.420808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.467 [2024-07-15 23:50:14.420815] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.467 [2024-07-15 23:50:14.420820] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.467 [2024-07-15 23:50:14.431164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.467 qpair failed and we were unable to recover it. 00:24:25.467 [2024-07-15 23:50:14.440797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.467 [2024-07-15 23:50:14.440830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.467 [2024-07-15 23:50:14.440844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.467 [2024-07-15 23:50:14.440851] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.467 [2024-07-15 23:50:14.440857] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.725 [2024-07-15 23:50:14.451164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.725 qpair failed and we were unable to recover it. 00:24:25.725 [2024-07-15 23:50:14.460921] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.725 [2024-07-15 23:50:14.460960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.725 [2024-07-15 23:50:14.460974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.725 [2024-07-15 23:50:14.460981] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.725 [2024-07-15 23:50:14.460986] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.725 [2024-07-15 23:50:14.471326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.725 qpair failed and we were unable to recover it. 
00:24:25.725 [2024-07-15 23:50:14.480960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.725 [2024-07-15 23:50:14.480997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.725 [2024-07-15 23:50:14.481011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.725 [2024-07-15 23:50:14.481018] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.725 [2024-07-15 23:50:14.481024] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.725 [2024-07-15 23:50:14.491439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.725 qpair failed and we were unable to recover it. 00:24:25.725 [2024-07-15 23:50:14.500980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.725 [2024-07-15 23:50:14.501020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.725 [2024-07-15 23:50:14.501037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.725 [2024-07-15 23:50:14.501044] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.725 [2024-07-15 23:50:14.501049] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.725 [2024-07-15 23:50:14.511062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.725 qpair failed and we were unable to recover it. 00:24:25.725 [2024-07-15 23:50:14.521000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.725 [2024-07-15 23:50:14.521040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.725 [2024-07-15 23:50:14.521055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.725 [2024-07-15 23:50:14.521061] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.725 [2024-07-15 23:50:14.521067] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.725 [2024-07-15 23:50:14.531281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.725 qpair failed and we were unable to recover it. 
00:24:25.725 [2024-07-15 23:50:14.541082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.725 [2024-07-15 23:50:14.541120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.725 [2024-07-15 23:50:14.541134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.725 [2024-07-15 23:50:14.541141] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.725 [2024-07-15 23:50:14.541147] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.725 [2024-07-15 23:50:14.551424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.725 qpair failed and we were unable to recover it. 00:24:25.725 [2024-07-15 23:50:14.561228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.725 [2024-07-15 23:50:14.561266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.725 [2024-07-15 23:50:14.561280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.725 [2024-07-15 23:50:14.561286] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.725 [2024-07-15 23:50:14.561292] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:25.725 [2024-07-15 23:50:14.571557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:25.725 qpair failed and we were unable to recover it. 
00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Write completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.657 starting I/O failed 00:24:26.657 Read completed with error (sct=0, sc=8) 00:24:26.658 starting I/O failed 00:24:26.658 Write completed with error (sct=0, sc=8) 00:24:26.658 starting I/O failed 00:24:26.658 Read completed with error (sct=0, sc=8) 00:24:26.658 starting I/O failed 00:24:26.658 Read completed with error (sct=0, sc=8) 00:24:26.658 starting I/O failed 00:24:26.658 Write completed with error (sct=0, sc=8) 00:24:26.658 starting I/O failed 00:24:26.658 [2024-07-15 23:50:15.576656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:26.658 [2024-07-15 23:50:15.584028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.658 [2024-07-15 23:50:15.584075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.658 [2024-07-15 23:50:15.584091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.658 [2024-07-15 23:50:15.584099] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:24:26.658 [2024-07-15 23:50:15.584105] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:24:26.658 [2024-07-15 23:50:15.594766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:26.658 qpair failed and we were unable to recover it. 00:24:26.658 [2024-07-15 23:50:15.604288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.658 [2024-07-15 23:50:15.604330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.658 [2024-07-15 23:50:15.604345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.658 [2024-07-15 23:50:15.604351] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.658 [2024-07-15 23:50:15.604357] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:24:26.658 [2024-07-15 23:50:15.614502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:26.658 qpair failed and we were unable to recover it. 00:24:26.658 [2024-07-15 23:50:15.624388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.658 [2024-07-15 23:50:15.624420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.658 [2024-07-15 23:50:15.624439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.658 [2024-07-15 23:50:15.624447] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.658 [2024-07-15 23:50:15.624453] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:26.658 [2024-07-15 23:50:15.634923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:26.658 qpair failed and we were unable to recover it. 00:24:26.915 [2024-07-15 23:50:15.644368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.915 [2024-07-15 23:50:15.644410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.915 [2024-07-15 23:50:15.644429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.915 [2024-07-15 23:50:15.644439] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.915 [2024-07-15 23:50:15.644447] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:26.915 [2024-07-15 23:50:15.654992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:26.915 qpair failed and we were unable to recover it. 
00:24:26.915 [2024-07-15 23:50:15.655116] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:26.915 A controller has encountered a failure and is being reset. 00:24:26.915 [2024-07-15 23:50:15.664608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.915 [2024-07-15 23:50:15.664659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.915 [2024-07-15 23:50:15.664685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.915 [2024-07-15 23:50:15.664697] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.916 [2024-07-15 23:50:15.664706] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.916 [2024-07-15 23:50:15.674936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.916 qpair failed and we were unable to recover it. 00:24:26.916 [2024-07-15 23:50:15.684414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:26.916 [2024-07-15 23:50:15.684447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:26.916 [2024-07-15 23:50:15.684462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:26.916 [2024-07-15 23:50:15.684468] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:26.916 [2024-07-15 23:50:15.684474] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:26.916 [2024-07-15 23:50:15.695053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:26.916 qpair failed and we were unable to recover it. 00:24:26.916 [2024-07-15 23:50:15.695219] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:26.916 [2024-07-15 23:50:15.728809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:26.916 Controller properly reset. 00:24:26.916 Initializing NVMe Controllers 00:24:26.916 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.916 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:26.916 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:26.916 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:26.916 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:26.916 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:26.916 Initialization complete. Launching workers. 
00:24:26.916 Starting thread on core 1 00:24:26.916 Starting thread on core 2 00:24:26.916 Starting thread on core 3 00:24:26.916 Starting thread on core 0 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:26.916 00:24:26.916 real 0m12.553s 00:24:26.916 user 0m28.269s 00:24:26.916 sys 0m2.019s 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:26.916 ************************************ 00:24:26.916 END TEST nvmf_target_disconnect_tc2 00:24:26.916 ************************************ 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1093 -- # '[' 2 -le 1 ']' 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:26.916 ************************************ 00:24:26.916 START TEST nvmf_target_disconnect_tc3 00:24:26.916 ************************************ 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1117 -- # nvmf_target_disconnect_tc3 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=1579608 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:24:26.916 23:50:15 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:24:29.457 23:50:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 1578343 00:24:29.457 23:50:17 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 
00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Read completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 Write completed with error (sct=0, sc=8) 00:24:30.398 starting I/O failed 00:24:30.398 [2024-07-15 23:50:19.021174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:30.965 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 1578343 Killed "${NVMF_APP[@]}" "$@" 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1580269 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1580269 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@823 -- # '[' -z 1580269 ']' 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@827 
-- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@830 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:30.965 23:50:19 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.965 [2024-07-15 23:50:19.913862] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:24:30.965 [2024-07-15 23:50:19.913912] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.224 [2024-07-15 23:50:19.981212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, 
sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Write completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 Read completed with error (sct=0, sc=8) 00:24:31.224 starting I/O failed 00:24:31.224 [2024-07-15 23:50:20.026178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:31.224 [2024-07-15 23:50:20.061370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.224 [2024-07-15 23:50:20.061405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.224 [2024-07-15 23:50:20.061412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.224 [2024-07-15 23:50:20.061418] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.224 [2024-07-15 23:50:20.061423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.224 [2024-07-15 23:50:20.061567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:31.224 [2024-07-15 23:50:20.061675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:31.224 [2024-07-15 23:50:20.061780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:31.224 [2024-07-15 23:50:20.061781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@856 -- # return 0 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.792 Malloc0 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:31.792 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:31.792 23:50:20 
nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.051 [2024-07-15 23:50:20.793526] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2389cf0/0x23958c0) succeed. 00:24:32.051 [2024-07-15 23:50:20.802871] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x238b330/0x23d6f50) succeed. 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.051 [2024-07-15 23:50:20.945508] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@553 -- # xtrace_disable 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@581 -- # [[ 0 == 0 ]] 00:24:32.051 23:50:20 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 1579608 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, 
sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Read completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 Write completed with error (sct=0, sc=8) 00:24:32.051 starting I/O failed 00:24:32.051 [2024-07-15 23:50:21.031071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.309 [2024-07-15 23:50:21.032656] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:32.310 [2024-07-15 23:50:21.032673] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:32.310 [2024-07-15 23:50:21.032680] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.245 [2024-07-15 23:50:22.036541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.245 qpair failed and we were unable to recover it. 
00:24:33.245 [2024-07-15 23:50:22.037984] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:33.245 [2024-07-15 23:50:22.037998] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:33.245 [2024-07-15 23:50:22.038004] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.180 [2024-07-15 23:50:23.041851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.180 qpair failed and we were unable to recover it. 00:24:34.180 [2024-07-15 23:50:23.043209] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:34.180 [2024-07-15 23:50:23.043224] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:34.180 [2024-07-15 23:50:23.043230] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.113 [2024-07-15 23:50:24.046903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.113 qpair failed and we were unable to recover it. 00:24:35.113 [2024-07-15 23:50:24.048287] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:35.113 [2024-07-15 23:50:24.048301] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:35.113 [2024-07-15 23:50:24.048307] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.490 [2024-07-15 23:50:25.052270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.490 qpair failed and we were unable to recover it. 00:24:36.490 [2024-07-15 23:50:25.053878] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:36.490 [2024-07-15 23:50:25.053893] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:36.490 [2024-07-15 23:50:25.053898] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:37.424 [2024-07-15 23:50:26.057665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:37.424 qpair failed and we were unable to recover it. 00:24:37.424 [2024-07-15 23:50:26.059107] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:37.424 [2024-07-15 23:50:26.059121] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:37.424 [2024-07-15 23:50:26.059127] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:38.359 [2024-07-15 23:50:27.062953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:38.359 qpair failed and we were unable to recover it. 
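Every attempt in this stretch is refused at the RDMA connection-manager level (RDMA_CM_EVENT_REJECTED instead of ESTABLISHED, "RDMA connect error -74") because these reconnects still target the original 192.168.100.8 listener, whose target application (pid 1578343) was killed with kill -9 at the start of tc3. The host keeps cycling through CONNECT attempts until its keep-alive expires and, as the lines that follow show, it falls back to the alternate address. The workload driving this is the reconnect example launched at host/target_disconnect.sh@55 above; run by hand it looks like the following (arguments copied from this log; the flag meanings given in the comment follow the usual SPDK example-app conventions and are the editor-assumed reading, not test output):

  # Same invocation as host/target_disconnect.sh@55 above: queue depth 32, 4096-byte
  # random read/write at a 50/50 mix, 10 seconds, core mask 0xF, with the failover
  # target supplied via alt_traddr in the transport ID string.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect \
      -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9'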
00:24:38.359 [2024-07-15 23:50:27.064360] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:38.359 [2024-07-15 23:50:27.064378] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:38.359 [2024-07-15 23:50:27.064384] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:39.294 [2024-07-15 23:50:28.068093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:39.294 qpair failed and we were unable to recover it. 00:24:39.294 [2024-07-15 23:50:28.069672] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:39.294 [2024-07-15 23:50:28.069693] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:39.294 [2024-07-15 23:50:28.069700] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:40.228 [2024-07-15 23:50:29.073669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:40.228 qpair failed and we were unable to recover it. 00:24:40.228 [2024-07-15 23:50:29.075202] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:40.228 [2024-07-15 23:50:29.075216] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:40.228 [2024-07-15 23:50:29.075222] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:41.163 [2024-07-15 23:50:30.078926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:41.163 qpair failed and we were unable to recover it. 00:24:41.163 [2024-07-15 23:50:30.079058] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:41.163 A controller has encountered a failure and is being reset. 00:24:41.163 Resorting to new failover address 192.168.100.9 00:24:41.164 [2024-07-15 23:50:30.080856] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:41.164 [2024-07-15 23:50:30.080883] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:41.164 [2024-07-15 23:50:30.080894] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:42.540 [2024-07-15 23:50:31.084736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:42.540 qpair failed and we were unable to recover it. 
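The failover only has somewhere to go because, after killing the original target, tc3 started a replacement on the alternate address: the nvmf_tgt instance launched above with -i 0 -e 0xFFFF -m 0xF0 (pid 1580269), plus the rpc_cmd calls that created the Malloc0 bdev, the RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as a namespace, and listeners on 192.168.100.9:4420. A rough standalone equivalent of that setup, replayed with the plain scripts/rpc.py client instead of the test's rpc_cmd/nvmfappstart wrappers (paths relative to the SPDK tree), would be:

  # Approximate manual replay of the target setup shown earlier in this test,
  # not the script the test actually runs.
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420

Once that listener is up ("NVMe/RDMA Target Listening on 192.168.100.9 port 4420" above), the host's reset against the failover address can eventually complete, which is what the "Controller properly reset." line further below reports.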
00:24:42.540 [2024-07-15 23:50:31.086166] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:42.540 [2024-07-15 23:50:31.086181] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:42.540 [2024-07-15 23:50:31.086187] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:43.474 [2024-07-15 23:50:32.090023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:43.474 qpair failed and we were unable to recover it. 00:24:43.474 [2024-07-15 23:50:32.090116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.474 [2024-07-15 23:50:32.090223] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:43.474 [2024-07-15 23:50:32.092008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:43.474 Controller properly reset. 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, 
sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Read completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 Write completed with error (sct=0, sc=8) 00:24:44.411 starting I/O failed 00:24:44.411 [2024-07-15 23:50:33.137847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:44.411 Initializing NVMe Controllers 00:24:44.411 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.411 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.411 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:44.411 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:44.411 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:44.411 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:44.411 Initialization complete. Launching workers. 00:24:44.411 Starting thread on core 1 00:24:44.411 Starting thread on core 2 00:24:44.411 Starting thread on core 3 00:24:44.411 Starting thread on core 0 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:24:44.411 00:24:44.411 real 0m17.335s 00:24:44.411 user 1m1.646s 00:24:44.411 sys 0m3.463s 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:44.411 ************************************ 00:24:44.411 END TEST nvmf_target_disconnect_tc3 00:24:44.411 ************************************ 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1136 -- # return 0 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:44.411 rmmod nvme_rdma 00:24:44.411 rmmod nvme_fabrics 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- 
nvmf/common.sh@489 -- # '[' -n 1580269 ']' 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1580269 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@942 -- # '[' -z 1580269 ']' 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # kill -0 1580269 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # uname 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1580269 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # process_name=reactor_4 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' reactor_4 = sudo ']' 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1580269' 00:24:44.411 killing process with pid 1580269 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@961 -- # kill 1580269 00:24:44.411 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # wait 1580269 00:24:44.671 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:44.671 23:50:33 nvmf_rdma.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:44.671 00:24:44.671 real 0m37.010s 00:24:44.671 user 2m26.171s 00:24:44.671 sys 0m10.043s 00:24:44.671 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:44.671 23:50:33 nvmf_rdma.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:44.671 ************************************ 00:24:44.671 END TEST nvmf_target_disconnect 00:24:44.671 ************************************ 00:24:44.671 23:50:33 nvmf_rdma -- common/autotest_common.sh@1136 -- # return 0 00:24:44.671 23:50:33 nvmf_rdma -- nvmf/nvmf.sh@126 -- # timing_exit host 00:24:44.671 23:50:33 nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:44.671 23:50:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:44.930 23:50:33 nvmf_rdma -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:24:44.930 00:24:44.930 real 17m13.576s 00:24:44.930 user 43m31.320s 00:24:44.930 sys 4m8.065s 00:24:44.930 23:50:33 nvmf_rdma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:24:44.930 23:50:33 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:44.930 ************************************ 00:24:44.930 END TEST nvmf_rdma 00:24:44.930 ************************************ 00:24:44.930 23:50:33 -- common/autotest_common.sh@1136 -- # return 0 00:24:44.930 23:50:33 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:44.930 23:50:33 -- common/autotest_common.sh@1093 -- # '[' 3 -le 1 ']' 00:24:44.930 23:50:33 -- common/autotest_common.sh@1099 -- # xtrace_disable 00:24:44.930 23:50:33 -- common/autotest_common.sh@10 -- # set +x 00:24:44.930 ************************************ 00:24:44.930 START TEST spdkcli_nvmf_rdma 00:24:44.930 ************************************ 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@1117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:44.930 * Looking 
for test storage... 00:24:44.930 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1582522 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 1582522 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@823 -- # '[' -z 1582522 ']' 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@827 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@828 -- # local max_retries=100 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@830 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@832 -- # xtrace_disable 00:24:44.930 23:50:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:44.930 [2024-07-15 23:50:33.902975] Starting SPDK v24.09-pre git sha1 00bf4c571 / DPDK 24.03.0 initialization... 00:24:44.930 [2024-07-15 23:50:33.903024] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1582522 ] 00:24:45.189 [2024-07-15 23:50:33.958266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:45.189 [2024-07-15 23:50:34.038740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.189 [2024-07-15 23:50:34.038745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.755 23:50:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@852 -- # (( i == 0 )) 00:24:45.755 23:50:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@856 -- # return 0 00:24:45.755 23:50:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:45.755 23:50:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.755 23:50:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:45.756 23:50:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:46.015 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:46.015 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:46.015 23:50:34 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:24:46.015 23:50:34 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:51.377 23:50:39 spdkcli_nvmf_rdma 
-- nvmf/common.sh@295 -- # net_devs=() 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:51.377 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:51.377 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- 
nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:51.377 Found net devices under 0000:da:00.0: mlx_0_0 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:51.377 Found net devices under 0000:da:00.1: mlx_0_1 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:51.377 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:51.378 
23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:51.378 23:50:39 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:51.378 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:51.378 link/ether ec:0d:9a:8b:2b:7c brd ff:ff:ff:ff:ff:ff 00:24:51.378 altname enp218s0f0np0 00:24:51.378 altname ens818f0np0 00:24:51.378 inet 192.168.100.8/24 scope global mlx_0_0 00:24:51.378 valid_lft forever preferred_lft forever 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:51.378 23:50:40 
spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:51.378 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:51.378 link/ether ec:0d:9a:8b:2b:7d brd ff:ff:ff:ff:ff:ff 00:24:51.378 altname enp218s0f1np1 00:24:51.378 altname ens818f1np1 00:24:51.378 inet 192.168.100.9/24 scope global mlx_0_1 00:24:51.378 valid_lft forever preferred_lft forever 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:51.378 23:50:40 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:51.378 192.168.100.9' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:51.378 192.168.100.9' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:51.378 192.168.100.9' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:51.378 23:50:40 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:51.379 23:50:40 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:51.379 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:51.379 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:51.379 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:51.379 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:51.379 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:51.379 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:51.379 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:51.379 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 
00:24:51.379 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:51.379 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:51.379 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:51.379 ' 00:24:53.936 [2024-07-15 23:50:42.518660] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1eb3cf0/0x1d3a600) succeed. 00:24:53.936 [2024-07-15 23:50:42.528189] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1eb5380/0x1e256c0) succeed. 
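The allocate_nic_ips trace above shows how the harness turns the two mlx_0_* netdevs into the RDMA target addresses used by every listener configured below. A condensed sketch of that step, built only from the commands visible in the trace (the interface names and the 192.168.100.0/24 addresses are specific to this run, and the real get_ip_address/RDMA_IP_LIST logic in nvmf/common.sh carries extra checks not shown here):

    # Resolve the IPv4 address bound to an RDMA-capable netdev, as traced above.
    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 is e.g. "192.168.100.8/24".
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    # Interfaces returned by get_rdma_if_list in this run.
    RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9

NVMF_FIRST_TARGET_IP (192.168.100.8) is the address that all of the rdma listen_addresses created by the spdkcli commands below bind to, on ports 4260 through 4262.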
00:24:54.870 [2024-07-15 23:50:43.756255] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:24:57.398 [2024-07-15 23:50:45.919027] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:24:59.297 [2024-07-15 23:50:47.777076] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:25:00.232 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:00.232 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:00.232 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:00.232 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:00.232 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:00.232 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:00.232 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:00.232 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:00.232 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:00.232 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:25:00.232 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:00.232 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:00.232 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:00.491 23:50:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:00.491 23:50:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.491 23:50:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:00.491 23:50:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:00.491 23:50:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:00.491 23:50:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:00.491 23:50:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match 00:25:00.491 23:50:49 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:00.749 23:50:49 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:01.008 23:50:49 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:01.008 23:50:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:01.008 23:50:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.008 23:50:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:01.008 23:50:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:01.008 23:50:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:01.008 23:50:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:01.008 23:50:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:01.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:01.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:01.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:01.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:25:01.008 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:25:01.008 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:01.008 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:01.008 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:01.008 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:01.008 
'\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:01.008 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:01.008 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:01.008 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:01.008 ' 00:25:06.276 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:06.276 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:06.276 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:06.276 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:06.276 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:25:06.276 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:25:06.276 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:06.276 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:06.276 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:06.276 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:06.276 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:06.276 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:06.276 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:06.276 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 1582522 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@942 -- # '[' -z 1582522 ']' 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@946 -- # kill -0 1582522 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@947 -- # uname 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@947 -- # '[' Linux = Linux ']' 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@948 -- # ps --no-headers -o comm= 1582522 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@948 -- # process_name=reactor_0 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@952 -- # '[' reactor_0 = sudo ']' 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # echo 'killing process with pid 1582522' 00:25:06.276 killing process with pid 1582522 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@961 -- # kill 1582522 00:25:06.276 23:50:54 spdkcli_nvmf_rdma -- common/autotest_common.sh@966 -- # wait 1582522 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 
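The killprocess/nvmftestfini sequence just traced is the standard SPDK teardown pattern: confirm the PID is still alive and belongs to the SPDK reactor, signal it, wait for it to exit, then unload the NVMe fabrics modules. A simplified sketch of that flow (PID 1582522 is specific to this run; the real killprocess in autotest_common.sh also special-cases sudo-wrapped targets, and nvmf/common.sh wraps the module removal in a retry loop not shown here):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1       # bail out if the process is already gone
        # The traced helper also inspects `ps --no-headers -o comm= $pid`, which
        # reports "reactor_0" for the nvmf target in this run.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                      # only valid when the target was started by this shell
    }

    killprocess 1582522                  # the nvmf target PID from this run

    # nvmftestfini then unloads the fabrics stack; set +e tolerates "module in use".
    set +e
    modprobe -v -r nvme-rdma
    modprobe -v -r nvme-fabrics
    set -e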
00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:25:06.276 rmmod nvme_rdma 00:25:06.276 rmmod nvme_fabrics 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:25:06.276 00:25:06.276 real 0m21.341s 00:25:06.276 user 0m45.146s 00:25:06.276 sys 0m4.768s 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@1118 -- # xtrace_disable 00:25:06.276 23:50:55 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:25:06.276 ************************************ 00:25:06.276 END TEST spdkcli_nvmf_rdma 00:25:06.276 ************************************ 00:25:06.276 23:50:55 -- common/autotest_common.sh@1136 -- # return 0 00:25:06.276 23:50:55 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:25:06.276 23:50:55 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:25:06.276 23:50:55 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:25:06.276 23:50:55 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:25:06.276 23:50:55 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:25:06.276 23:50:55 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:25:06.276 23:50:55 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:25:06.276 23:50:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:06.276 23:50:55 -- common/autotest_common.sh@10 -- # set +x 00:25:06.276 23:50:55 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:25:06.276 23:50:55 -- common/autotest_common.sh@1386 -- # local autotest_es=0 00:25:06.276 23:50:55 -- common/autotest_common.sh@1387 -- # xtrace_disable 00:25:06.276 23:50:55 -- common/autotest_common.sh@10 -- # set +x 00:25:10.451 INFO: APP EXITING 00:25:10.451 INFO: killing all VMs 00:25:10.451 INFO: killing vhost app 00:25:10.451 INFO: EXIT DONE 00:25:12.982 Waiting for block devices as requested 00:25:12.982 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:25:12.982 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:13.241 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:13.241 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:13.241 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:13.241 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:13.500 0000:00:04.2 (8086 2021): vfio-pci -> 
ioatdma 00:25:13.500 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:13.500 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:13.759 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:13.759 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:13.759 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:13.759 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:14.017 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:14.017 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:14.017 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:14.276 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:16.810 Cleaning 00:25:16.810 Removing: /var/run/dpdk/spdk0/config 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:25:16.810 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:16.810 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:16.810 Removing: /var/run/dpdk/spdk1/config 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:25:16.810 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:16.810 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:16.810 Removing: /var/run/dpdk/spdk1/mp_socket 00:25:16.810 Removing: /var/run/dpdk/spdk2/config 00:25:16.810 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:16.810 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:16.810 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:16.810 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:16.810 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:25:17.069 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:25:17.069 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:25:17.069 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:25:17.069 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:17.069 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:17.069 Removing: /var/run/dpdk/spdk3/config 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:25:17.069 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:17.069 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:17.069 Removing: 
/var/run/dpdk/spdk4/config 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:25:17.069 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:17.069 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:17.069 Removing: /dev/shm/bdevperf_trace.pid1396325 00:25:17.069 Removing: /dev/shm/bdevperf_trace.pid1502456 00:25:17.069 Removing: /dev/shm/bdev_svc_trace.1 00:25:17.069 Removing: /dev/shm/nvmf_trace.0 00:25:17.069 Removing: /dev/shm/spdk_tgt_trace.pid1288963 00:25:17.069 Removing: /var/run/dpdk/spdk0 00:25:17.069 Removing: /var/run/dpdk/spdk1 00:25:17.069 Removing: /var/run/dpdk/spdk2 00:25:17.069 Removing: /var/run/dpdk/spdk3 00:25:17.069 Removing: /var/run/dpdk/spdk4 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1286618 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1287681 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1288963 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1289592 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1290533 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1290782 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1291750 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1291859 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1292110 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1296800 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1298105 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1298390 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1298677 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1298981 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1299277 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1299526 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1299772 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1300046 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1301015 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1303949 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1304200 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1304531 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1304541 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1305037 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1305237 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1305537 00:25:17.069 Removing: /var/run/dpdk/spdk_pid1305767 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1306023 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1306159 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1306306 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1306533 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1307038 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1307235 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1307568 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1307824 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1307919 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1307987 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1308232 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1308490 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1308739 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1308986 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1309237 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1309484 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1309732 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1309983 
00:25:17.328 Removing: /var/run/dpdk/spdk_pid1310268 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1310599 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1310860 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1311106 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1311359 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1311608 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1312071 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1312502 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1312769 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1313053 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1313318 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1313598 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1313784 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1314088 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1317757 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1358112 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1362110 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1372615 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1377772 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1381081 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1381784 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1388472 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1396325 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1396577 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1400403 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1405994 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1408688 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1418827 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1441896 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1445187 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1500522 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1501367 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1502456 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1506426 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1513172 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1514097 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1514966 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1515864 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1516172 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1520385 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1520391 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1524636 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1525103 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1525778 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1526470 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1526478 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1530951 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1531525 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1535553 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1538198 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1543408 00:25:17.328 Removing: /var/run/dpdk/spdk_pid1553327 00:25:17.588 Removing: /var/run/dpdk/spdk_pid1553332 00:25:17.588 Removing: /var/run/dpdk/spdk_pid1571390 00:25:17.588 Removing: /var/run/dpdk/spdk_pid1571629 00:25:17.588 Removing: /var/run/dpdk/spdk_pid1577231 00:25:17.588 Removing: /var/run/dpdk/spdk_pid1577729 00:25:17.588 Removing: /var/run/dpdk/spdk_pid1579608 00:25:17.588 Removing: /var/run/dpdk/spdk_pid1582522 00:25:17.588 Clean 00:25:17.588 23:51:06 -- common/autotest_common.sh@1445 -- # return 0 00:25:17.588 23:51:06 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:25:17.588 23:51:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.588 23:51:06 -- common/autotest_common.sh@10 -- # set +x 00:25:17.588 23:51:06 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:25:17.588 23:51:06 -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:25:17.588 23:51:06 -- common/autotest_common.sh@10 -- # set +x 00:25:17.588 23:51:06 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:17.588 23:51:06 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:25:17.588 23:51:06 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:25:17.588 23:51:06 -- spdk/autotest.sh@391 -- # hash lcov 00:25:17.588 23:51:06 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:25:17.588 23:51:06 -- spdk/autotest.sh@393 -- # hostname 00:25:17.588 23:51:06 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-05 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:25:17.846 geninfo: WARNING: invalid characters removed from testname! 00:25:39.778 23:51:24 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:39.778 23:51:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:40.346 23:51:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:42.252 23:51:30 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:43.631 23:51:32 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:45.533 23:51:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:25:46.910 23:51:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:25:46.910 23:51:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:46.910 23:51:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:25:46.910 23:51:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:46.910 23:51:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:46.910 23:51:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.910 23:51:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.910 23:51:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.910 23:51:35 -- paths/export.sh@5 -- $ export PATH 00:25:46.910 23:51:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:46.910 23:51:35 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:25:46.910 23:51:35 -- common/autobuild_common.sh@444 -- $ date +%s 00:25:46.910 23:51:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721080295.XXXXXX 00:25:46.910 23:51:35 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721080295.wf2DX4 00:25:46.910 23:51:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:25:46.910 23:51:35 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:25:46.910 23:51:35 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/' 00:25:46.910 23:51:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:25:46.910 23:51:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme 
--exclude /tmp --status-bugs' 00:25:46.910 23:51:35 -- common/autobuild_common.sh@460 -- $ get_config_params 00:25:46.910 23:51:35 -- common/autotest_common.sh@390 -- $ xtrace_disable 00:25:46.910 23:51:35 -- common/autotest_common.sh@10 -- $ set +x 00:25:46.910 23:51:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:25:46.910 23:51:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:25:46.910 23:51:35 -- pm/common@17 -- $ local monitor 00:25:46.910 23:51:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:46.910 23:51:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:46.910 23:51:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:46.910 23:51:35 -- pm/common@21 -- $ date +%s 00:25:46.910 23:51:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:46.910 23:51:35 -- pm/common@21 -- $ date +%s 00:25:46.910 23:51:35 -- pm/common@25 -- $ sleep 1 00:25:46.910 23:51:35 -- pm/common@21 -- $ date +%s 00:25:46.910 23:51:35 -- pm/common@21 -- $ date +%s 00:25:46.910 23:51:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080295 00:25:46.910 23:51:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080295 00:25:46.910 23:51:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080295 00:25:46.910 23:51:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721080295 00:25:47.169 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080295_collect-cpu-load.pm.log 00:25:47.169 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080295_collect-vmstat.pm.log 00:25:47.169 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080295_collect-cpu-temp.pm.log 00:25:47.169 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721080295_collect-bmc-pm.bmc.pm.log 00:25:48.107 23:51:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:25:48.107 23:51:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:25:48.107 23:51:36 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:48.107 23:51:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:25:48.107 23:51:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:25:48.107 23:51:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:25:48.107 23:51:36 -- common/autotest_common.sh@728 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:25:48.107 23:51:36 -- common/autotest_common.sh@729 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:25:48.107 23:51:36 -- common/autotest_common.sh@731 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:25:48.107 23:51:36 -- spdk/autopackage.sh@20 -- $ exit 0 00:25:48.107 23:51:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:25:48.107 23:51:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:25:48.107 23:51:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:25:48.107 23:51:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:48.107 23:51:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:25:48.107 23:51:36 -- pm/common@44 -- $ pid=1597170 00:25:48.107 23:51:36 -- pm/common@50 -- $ kill -TERM 1597170 00:25:48.107 23:51:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:48.107 23:51:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:25:48.107 23:51:36 -- pm/common@44 -- $ pid=1597171 00:25:48.107 23:51:36 -- pm/common@50 -- $ kill -TERM 1597171 00:25:48.107 23:51:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:48.107 23:51:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:25:48.107 23:51:36 -- pm/common@44 -- $ pid=1597173 00:25:48.107 23:51:36 -- pm/common@50 -- $ kill -TERM 1597173 00:25:48.107 23:51:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:25:48.107 23:51:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:25:48.107 23:51:36 -- pm/common@44 -- $ pid=1597198 00:25:48.107 23:51:36 -- pm/common@50 -- $ sudo -E kill -TERM 1597198 00:25:48.107 + [[ -n 1184331 ]] 00:25:48.107 + sudo kill 1184331 00:25:48.116 [Pipeline] } 00:25:48.136 [Pipeline] // stage 00:25:48.170 [Pipeline] } 00:25:48.192 [Pipeline] // timeout 00:25:48.198 [Pipeline] } 00:25:48.215 [Pipeline] // catchError 00:25:48.223 [Pipeline] } 00:25:48.244 [Pipeline] // wrap 00:25:48.251 [Pipeline] } 00:25:48.268 [Pipeline] // catchError 00:25:48.279 [Pipeline] stage 00:25:48.281 [Pipeline] { (Epilogue) 00:25:48.298 [Pipeline] catchError 00:25:48.300 [Pipeline] { 00:25:48.315 [Pipeline] echo 00:25:48.317 Cleanup processes 00:25:48.324 [Pipeline] sh 00:25:48.711 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:48.711 1597350 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache 00:25:48.711 1597669 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:48.751 [Pipeline] sh 00:25:49.035 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:25:49.035 ++ grep -v 'sudo pgrep' 00:25:49.035 ++ awk '{print $1}' 00:25:49.035 + sudo kill -9 1597350 00:25:49.049 [Pipeline] sh 00:25:49.331 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:25:57.454 [Pipeline] sh 00:25:57.735 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:25:57.736 Artifacts sizes are good 00:25:57.749 [Pipeline] archiveArtifacts 00:25:57.756 Archiving artifacts 00:25:57.888 [Pipeline] sh 00:25:58.171 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest 00:25:58.187 [Pipeline] cleanWs 00:25:58.197 [WS-CLEANUP] Deleting project workspace... 00:25:58.197 [WS-CLEANUP] Deferred wipeout is used... 
00:25:58.204 [WS-CLEANUP] done 00:25:58.205 [Pipeline] } 00:25:58.226 [Pipeline] // catchError 00:25:58.240 [Pipeline] sh 00:25:58.519 + logger -p user.info -t JENKINS-CI 00:25:58.529 [Pipeline] } 00:25:58.545 [Pipeline] // stage 00:25:58.550 [Pipeline] } 00:25:58.568 [Pipeline] // node 00:25:58.574 [Pipeline] End of Pipeline 00:25:58.611 Finished: SUCCESS
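For reference, the coverage post-processing near the end of this log reduces to a merge-then-filter pattern: combine the baseline and post-test lcov captures, then strip sources that should not count toward SPDK coverage. A condensed sketch using the same filter patterns applied in this run (the genhtml-related --rc options and the full workspace paths from the trace are trimmed here for readability):

    out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
    lcov_opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

    # Merge the pre-test baseline with the post-test capture into one tracefile.
    lcov "${lcov_opts[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Drop DPDK, system, and example/app sources, matching the filters in the trace above.
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov "${lcov_opts[@]}" -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done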